Capsule Networks: An Improvement to Convolutional Networks

  • Published 15 Sep 2024

Comments • 235

  • @UnboxingSve
    @UnboxingSve 6 years ago +161

    All I can say is huge respect to you, Siraj. How fast you catch up with new things is just amazing!

    • @SirajRaval
      @SirajRaval 6 years ago +10

      Thanks! I really love this stuff so it's always fun to study it

    • @whatcani2
      @whatcani2 6 years ago +4

      I think u have a GPU in your brain to speed up learning all those new algos.

    • @randpaul9863
      @randpaul9863 4 years ago

      pip3 install tensorflow --upgrade

  • @IyamwhoIyam
    @IyamwhoIyam 6 years ago +3

    Hi Siraj, I've been waiting for this paper. What a pleasant surprise to learn it was published only a few days ago! I have downloaded the paper and will read it over my second, third, and fourth cup of coffee. You've done an excellent job presenting this very complex topic.

  • @snzn3854
    @snzn3854 6 years ago +6

    About time. I was waiting for him to publish something like this because he keeps mentioning the many things wrong with backpropagation.

  • @thedevo01
    @thedevo01 6 years ago

    I am so very grateful for your efforts to deliver all of this information. You're a very good educator.
    The way you explain these complex solutions is demystifying and easy to understand, and showing them in practice to validate what we came to understand gives a sense of success, which is inspiring.
    It hasn't been long since I began watching you (summer 2017), but your passion for discovery and success in uncovering (and teaching about) these developments has been an enlightening experience!
    Thank you!

  • @bomb3r422
    @bomb3r422 6 years ago

    I think capsule networks will be game-changing. Big ups Siraj, you never fail to amaze!

  • @ladjiab
    @ladjiab 6 years ago +72

    Wish I had money to support you for all the good work you are doing.
    Thank you

    • @diegoantoniorosariopalomin4977
      @diegoantoniorosariopalomin4977 6 years ago

      I supported him for months and he never delivered the rewards

    • @diegoantoniorosariopalomin4977
      @diegoantoniorosariopalomin4977 6 years ago

      If you read the comments on his older videos you will see me asking for the private chat for backers repeatedly

    • @diegoantoniorosariopalomin4977
      @diegoantoniorosariopalomin4977 6 years ago

      And him giving increasingly vague answers

    • @SirajRaval
      @SirajRaval 6 years ago +3

      thanks for listening :)

    • @unoqualsiasi7341
      @unoqualsiasi7341 6 years ago +6

      The rewards are the videos, the code, and the knowledge he shares here. Man, there are people that play fking video games and receive thousands of dollars in donations for that. Stop complaining please, this is useful knowledge worth more than a dollar a month.

  • @2xehpa
    @2xehpa 6 years ago +29

    You are wrong. They did test it on CIFAR10, with less promising results (~10% error when SOTA is ~3-4%). But this is not that important. They clearly state in the paper that this is not supposed to be a fully formed amazing new architecture:
    "There are many possible ways to implement the general idea of capsules. The aim of this paper is not to explore this whole space but to simply show that one fairly straightforward implementation works well and that dynamic routing helps."

  • @allennelson1987
    @allennelson1987 4 years ago +1

    That I didn't understand it is more about me and my experience than it is about his explanation. I didn't understand it, but I have upvoted it anyway because it was interesting.

  • @levinicklas7885
    @levinicklas7885 6 years ago

    Really love the new video format. Definitely a step up from your old videos! Much easier to follow; I'm learning a lot more!

  •  6 years ago +2

    Keep it up Siraj. To me you are like "La Mouche du Coche", energizing my will to carry on with the subjects that matter to us. Thank you

  • @yvanscher7555
    @yvanscher7555 6 years ago +4

    It's incredible that after so much has been done on a dataset like MNIST you can still get state of the art if you come up with something clever. In short, a capsule network adds a third dimension to the network shape. Cool.

    • @SirajRaval
      @SirajRaval 6 years ago +1

      great way of putting it! 'a 3rd dimension'
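
The "third dimension" intuition can be made concrete with a quick shape sketch. This is an illustrative NumPy snippet, not code from the video; the sizes happen to match the paper's PrimaryCaps layer (a 6x6x32 grid of 8-dimensional capsules):

```python
import numpy as np

# A conventional conv layer outputs one scalar activation per grid cell:
conv_out = np.zeros((6, 6, 32))        # (height, width, channels)

# A capsule layer replaces each scalar with an activity *vector* whose
# orientation encodes instantiation parameters (pose) - the extra axis:
caps_out = np.zeros((6, 6, 32, 8))     # (height, width, capsules, pose dims)

# The vector's length is read as the probability that the entity exists:
existence = np.linalg.norm(caps_out, axis=-1)
print(existence.shape)                 # (6, 6, 32)
```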

  • @arriva1256
    @arriva1256 6 years ago

    It's just amazing that Hinton once again revolutionized neural nets, or AI if you want to call it that. Incredible guy!

  • @Gannicus99
    @Gannicus99 6 years ago

    Loving the more serious format (memes dropped) and the good link documentation! This has really gotten better!

  • @tanmaybhatnagar4849
    @tanmaybhatnagar4849 6 years ago +4

    Guys just to be clear the image of the Neural Net at 6:44 is not cropped. It is the original image that is in the paper. The publishers themselves published a cropped image by mistake. (I find it quite funny actually)

  • @011azr
    @011azr 6 years ago

    Dude, you don't have any PhD but it feels like you're an expert in the deep learning field. Thanks for making the concept much easier for me to grasp.

    • @011azr
      @011azr 6 years ago

      Just stalked your LinkedIn profile, you seem to have a passion for teaching. I still wonder why you haven't pursued a PhD yet. Going to Stanford and doing a project with Andrew Ng on Google Brain sounds like so much fun for people like you. Anyway, thanks for the video. Even if you decide to continue your studies or do something else out there, please keep making useful educational videos like this. Thanks :).

    • @g0d182
      @g0d182 6 years ago

      To have a chance at the core of the Google brain team, you probably need to produce three or more sequences of work that beat some non-trivial state of the art in a huge way.

    • @SirajRaval
      @SirajRaval 6 years ago +1

      because of time. im full time making content for you guys. and i love it

  • @soumensinha305
    @soumensinha305 6 years ago +57

    Siraj please make videos on reinforcement learning, as it can serve very well for general-purpose intelligence

  • @Piyush2896
    @Piyush2896 6 years ago +8

    It would be interesting to see the results of a dynamic routing capsule model being attacked by the pixel attacks at 1, 3 or 5 pixels, as done in the paper you mentioned, and how it fares against CNNs

  • @EngIlya
    @EngIlya 6 years ago

    Hey Siraj, thanks for the video! A note: the advantage of a CNN over an MLP is not computational complexity but statistical efficiency - we use the "translational symmetry" in the image, teaching the net that e.g. an eye at the top of an image is the same thing as an eye at the bottom of an image.
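
The statistical-efficiency point can be made concrete by counting weights. A rough sketch with hypothetical layer sizes (256 hidden units, 32 feature maps - neither figure is from the video):

```python
# Illustrative parameter counts for a 28x28 grayscale input.

# MLP: every pixel connects to every hidden unit, so an "eye detector"
# learned at the top of the image tells us nothing about the bottom.
mlp_params = 28 * 28 * 256    # 200,704 weights for one 256-unit layer

# CNN: one 5x5 kernel is shared across all spatial positions, exploiting
# the translational symmetry the comment describes.
cnn_params = 5 * 5 * 32       # 800 weights for 32 feature maps

print(mlp_params, cnn_params)
```

Far fewer parameters means the CNN needs far fewer examples to learn the same translation-tolerant feature.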

  • @DF-rd6zv
    @DF-rd6zv 5 years ago

    Dude, great work, your visual descriptions of these structures build a phenomenal image in my head. And a TF implementation? Siiick.

  • @ebertolo100
    @ebertolo100 6 years ago

    Only a few words about your video: Amazing! And thanks so much for sharing!

  • @AnimSparkStudios
    @AnimSparkStudios 4 years ago

    The way you teach is really unique

  • @kevinchweya3087
    @kevinchweya3087 6 years ago +19

    Siraj explains it like it's normal 1 + 1 math, but then when I get down to understanding the code, the calculus, the math in it, 😭😭😭😭

  • @siarez
    @siarez 6 years ago +5

    Thanks for the video. I wish you had explained how the capsule network overcomes the shortcomings of a regular CNN.

  • @davidm.johnston8994
    @davidm.johnston8994 6 years ago

    Thanks, great video man. It's so much better when you are serious!

  • @charlieyou97
    @charlieyou97 6 years ago

    Siraj, absolutely love your videos and am incredibly impressed with how fast you can get videos out on novel concepts. If you'll allow me one critique, I do think that your videos would benefit if you spoke slower, especially during the sections where you are explaining code. Quite frequently, I slow down that part to .75x so that my brain can absorb the connection between your words and the code I am seeing.
    Keep up the amazing work!

  • @carlosjosejimenezbermudez9255
    @carlosjosejimenezbermudez9255 6 years ago +1

    Man, I definitely give you props for the change in your video style; it's still you, but it's now a lot easier to understand and follow. Quick question, if I may: do you think capsule-based neural networks could be a way to crack some of the issues of 3D generation with conv nets?

  • @knexator_
    @knexator_ 6 years ago +38

    Paper here: arxiv.org/pdf/1710.09829.pdf

    • @ehfo
      @ehfo 6 years ago

      thanks

    • @y__h
      @y__h 6 years ago

      You're awesome

    • @SirajRaval
      @SirajRaval 6 years ago

      good link

  • @hayatitutar8429
    @hayatitutar8429 6 years ago

    Thanks Siraj. I think capsule networks will be very helpful for us in deep learning studies.

  • @DoctorKarul
    @DoctorKarul 6 years ago

    Hit Like when Siraj casually drops that "because [we all know] neural networks are universal function approximators."

  • @sgaseretto
    @sgaseretto 6 years ago

    Awesome video Siraj, as always! By the way, nice DeepMind shirt

  • @darkhydrastar
    @darkhydrastar 4 months ago

    Great video. Well done.

  • @AbeDillon
    @AbeDillon 6 years ago +15

    Dammit, Hinton! You beat me to this idea!

    • @aigen-journey
      @aigen-journey 6 years ago +1

      He has been talking about capsules for quite some time now. I think it's still not the final solution to equivariance, but a small step in the right direction.

    • @SirajRaval
      @SirajRaval 6 years ago +2

      always with the ideas hah

    • @grekogecko
      @grekogecko 6 years ago

      Hahaha, he had this idea a long time ago, but it wasn't until now that Sabour put in the effort to materialize it :P

  • @makokal10010
    @makokal10010 6 years ago

    Not that I don't like the content or anything, but not mentioning the first author at all is absolutely not fair in terms of attribution. This is regardless of who had the original idea. Someone actually did the work to make this paper happen and she deserves credit for that.

  • @Ruhgtfo
    @Ruhgtfo 3 years ago

    Great explanation

  • @Frankthegravelrider
    @Frankthegravelrider 6 years ago

    Always at it with those fresh vids!

  • @UsmanAhmed-sq9bl
    @UsmanAhmed-sq9bl 6 years ago

    Thank you, Siraj, for an awesome presentation.

  • @bizzyvinci
    @bizzyvinci 3 years ago

    You're awesome! Thanks

  • @RishabhSaxena1996
    @RishabhSaxena1996 6 years ago

    Next up, do one on progressive learning and about how accurate the outputs are for those.

  • @gotel100
    @gotel100 6 years ago

    geoff hinton is awesome!

  • @RavinderRam
    @RavinderRam 6 years ago

    awesome as usual

  • @godbennett
    @godbennett 6 years ago +1

    Hinton's paper sounds quite similar to "Network in Network" by Lin et al, 2013, arXiv.
    "Network in Network" like Hinton's paper:
    1) Captures abstractions in nested neural bundles, and is less susceptible to overfitting than prior works.
    2) Uses "global average pooling", but does so over the classification layer, and not per capsule or neuron bundle as in Hinton's paper.
    arxiv.org/abs/1312.4400

  • @vladimirtchuiev2218
    @vladimirtchuiev2218 6 years ago

    Some correction to the difference between AlexNet and VGG: the author of AlexNet wasn't a computer science guy, so he relied on his sharp intuition a lot. AlexNet, while very significant in its own right, is very arbitrarily put together. VGG is a network that was made by computer science folks; it is very ordered, has more layers with a consistent layout, and is much simpler overall in its structure. VGG is still often used in DL research. Besides, the number of neurons in each layer is 2^x, where x is an integer, suggesting that it corresponds to the number of GPU threads (different versions of VGG for different GPUs).
    Also, it's worth mentioning that GoogLeNet doesn't use fully connected layers at the end; it's purely convolutional. It's problematic in DL research because it works well but nobody really understands it. ResNet was a very deep network of 152 layers, and in theory shouldn't work at all, but I don't know the exact details.

  • @neilwang9124
    @neilwang9124 6 years ago

    Hi Siraj, thanks for the precise summarization of concepts. I would recommend that maybe you could set an estimated knowledge level of the potential audience for each video and try to explain your ideas for different levels of audience, to avoid mixing easy and difficult stuff together.

  • @AndrewMelnychuk0seen
    @AndrewMelnychuk0seen 6 years ago +1

    Damn dude, you are on top of your stuff. I've been anticipating this tech since I did Hinton's Coursera class. Thanks for explaining.

  • @itsSKG
    @itsSKG 6 years ago

    Siraj is back ❤

  • @jayshah4016
    @jayshah4016 6 years ago

    Thank you SIRAJ.... Great video... Can you make a video explaining YOLO object detection?

  • @debarko
    @debarko 6 years ago

    I have been waiting for this...

  • @durand101
    @durand101 6 years ago +18

    Siraj, don't you think that scoring better and better on MNIST is a bad target? A 100% accuracy wouldn't make any sense because there are quite a few digits in MNIST which are genuinely ambiguous. Why should new models achieve a rate much higher than what the SOTA is? Shouldn't we move on to more serious baselines?

    • @tonycatman
      @tonycatman 6 years ago +3

      I've thought a lot about this before, and I've seen some of the digits you are talking about.
      The digits are ambiguous to you (and me), but obviously they aren't ambiguous to the algorithm.
      The question is resolved by finding out whether the classification correctly represents the original intention of the person who wrote the digits, and it is reasonable to assume that their intention is correctly reflected in the 'y' target.
      I've had to come to the realisation over the last 15 years that some of the algorithms I've put together are simply much better at the task I've set them than I ever would be. Not just faster, but more accurate.
      In fact, my current test for when I've perfected an algorithm is when I am repeatedly convinced that the system has gotten it wrong, but on investigation I'm wrong.

    • @poojanpatel2437
      @poojanpatel2437 6 years ago +2

      CIFAR-10 is also used as a baseline, and CIFAR-10 is a much more elegant baseline than MNIST.

    • @011azr
      @011azr 6 years ago

      Exactly. They need to come up with a more useful "intro to deep learning" dataset.

    • @SirajRaval
      @SirajRaval 6 years ago

      yea need moar baselines this was a start

    • @tobiasgehring2462
      @tobiasgehring2462 6 years ago +2

      Durand D'souza from watching another talk on capsule networks, it seems that "state of the art performance on MNIST" in this case doesn't mean "higher accuracy", but rather "the same accuracy with less supervision". It's not that it's trying to get 100% accuracy, but instead it's getting similar accuracy to previous models, but only requires a fraction of labelled data compared to them.
      This is really helpful because for a more complicated problem, getting a large amount of high quality labelled data can be a real issue, so if we can get similar accuracy with lots of unlabelled data and a small amount of labelled data, that seems like a serious win.

  • @unoqualsiasi7341
    @unoqualsiasi7341 6 years ago

    Thanks for the interesting video Siraj!

  • @godsobsex
    @godsobsex 6 years ago

    You're simply just awesome.

  • @shivroh7678
    @shivroh7678 6 years ago +2

    If Geoffrey Hinton is god, then Siraj is the messenger....!!
    Hats off man.....!!

  • @AbhimanyuAryan
    @AbhimanyuAryan 4 years ago

    sweet fast introduction...thanks

  • @nesmaashraf3427
    @nesmaashraf3427 4 years ago

    Thanks a lot for your help, you're really talented. Respect for you from Egypt :)

  • @Stan_144
    @Stan_144 3 years ago

    Capsule networks are the right path forward. They also have some similarities to Jeff Hawkins' ideas.

  • @KulvinderSingh-pm7cr
    @KulvinderSingh-pm7cr 6 years ago

    Yann LeCun was working on backprop earlier than Geoff, though Geoff's version got popularized more in the community.

  • @luck3949
    @luck3949 6 years ago +1

    You have a DeepMind T-shirt, do you work there, or did you win it in some sort of competition? Or what?

  • @layeroftranslation
    @layeroftranslation 6 years ago

    Cool!

  • @ehfo
    @ehfo 6 years ago

    Love it! Thanks Siraj!

  • @souravgames
    @souravgames 6 years ago

    Thanks for your videos, what you are doing is amazing. A small request: can you make a live video on recommendation systems or market basket analysis, like Apriori? Thanks a lot in advance.

  • @inlustrolearningprivatelim4868
    @inlustrolearningprivatelim4868 6 years ago

    Hey Siraj, I'm a huge fan of your vids. You are doing an awesome job with your lucid explanations. I am quite new to machine learning (deep learning in particular). Is there any particular order you would recommend going through your videos in so as to get a comprehensive outlook on the content? Also, I heard one of your videos wherein you were talking about the intersection of AI and blockchain in the creation of DAOs. I am working on that right now. It's truly inspiring to see your enthusiasm. Hoping to see more videos on blockchains and DApps from your channel :) Once again, thanks for all the effort!!

  • @theakitata
    @theakitata 6 years ago

    It would be great to explain a little more about the architecture when you already have that nice picture of the capsule network! Thanks anyway :D

  • @RoxanaNoe
    @RoxanaNoe 6 years ago

    Great video Siraj!!!

  • @fabian.hertwig
    @fabian.hertwig 6 years ago +3

    What is the website he is scrolling through?

  • @user-ro4mi2td1p
    @user-ro4mi2td1p 6 years ago +1

    What types of neural networks exist for data that are neither images nor sequences (supervised learning)?

  • @anandsrivastava5845
    @anandsrivastava5845 4 years ago +1

    Can I apply this network to text datasets? Because what you are explaining relates to image features.

    • @TheAAMvideos
      @TheAAMvideos 4 years ago

      yes you can! You have to turn your text into a matrix first. Check out this paper: arxiv.org/abs/1906.04898

  • @godbennett
    @godbennett 6 years ago

    Congrats on the Google DeepMind job Siraj

  • @larryteslaspacexboringlawr739
    @larryteslaspacexboringlawr739 6 years ago

    thank you for capsule network video

  • @parthtrehan8668
    @parthtrehan8668 5 years ago

    I had a question: if a CNN does not capture spatial correlation, that would be because it uses only one type of convolutional matrix (3x3 or 4x4), but Inception v3 uses 2x2, 3x3 and 4x4, which could capture that the eyes are above the nose. Does the Inception model also fail to capture spatial correlation?

  • @SaveAsss
    @SaveAsss 6 years ago

    I would argue that this is more or less what Numenta have been working on for a while now (old stuff). Maybe you can point out some differences I didn't notice?

  • @jonathansettle4839
    @jonathansettle4839 6 years ago

    Great video, well explained.

  • @wafaawardah3264
    @wafaawardah3264 6 years ago

    Wow. Respect respect respect.

  • @ismaelgoldsteck5974
    @ismaelgoldsteck5974 6 years ago

    please do a comparison between two trained networks

  • @WildAnimalChannel
    @WildAnimalChannel 6 years ago

    So do the capsules store orientations of objects? I reckon the way humans recognise is like this: First we might see some features then we guess the object (also using context). Then we see what other features the object should have and where. Then we look to see if those features exist where we expect them to be. And if we don't recognise a feature we might look at sub-features and so on. Going up and down the hierarchy until we can say "ah, that's a five legged dog with carrots for eyes."

  • @MalikKlc
    @MalikKlc 6 years ago +1

    Siraj, what do you think about using the Go language in ML (and AI in general)? Do you think it can take over Python in this field once there are more libraries available?

  • @gheorghegardu9239
    @gheorghegardu9239 6 years ago

    I tried to train on the MNIST data and I get an error: File "C:\users\gg\capsnet.py", line 63, in build_arch, argmax_idx=tf.argmax(self.softmax_v, axis=1, output_type=tf.int32), TypeError: argmax got an unexpected keyword argument 'output_type'. Do you have an idea how to fix it? Thanks
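
For anyone hitting the same error: the `output_type` keyword only exists in later TensorFlow releases, so on an older install the usual workaround is to take the argmax first and cast the indices afterwards, e.g. `tf.cast(tf.argmax(self.softmax_v, axis=1), tf.int32)` (assuming a TF 1.x-era API). The equivalent logic in plain NumPy, as a sketch:

```python
import numpy as np

# Stand-in for self.softmax_v: per-row class probabilities.
softmax_v = np.array([[0.1, 0.7, 0.2],
                      [0.5, 0.3, 0.2]])

# Instead of argmax(..., output_type=int32), argmax first, then cast:
argmax_idx = np.argmax(softmax_v, axis=1).astype(np.int32)
print(argmax_idx)   # [1 0]
```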

  • @hoyinchan343
    @hoyinchan343 6 years ago

    thanks

  • @sunnyppanchal
    @sunnyppanchal 6 years ago

    Great job with the video!

  • @delenlawson1251
    @delenlawson1251 6 years ago

    Great Job!

  • @dimitriosmallios5941
    @dimitriosmallios5941 6 years ago

    What are the weaknesses of this model? I assume that because it maximizes a prediction and maps to a specific entity (capsule), it recognizes only one class per image, right?

  • @PoriaNikvand
    @PoriaNikvand 5 years ago

    Hi. I can't find the slides you presented in the links. Is it available?

  • @DistortedV12
    @DistortedV12 5 years ago

    Can we get a video on unsupervised capsule networks? And semi-supervised learning?

  • @donaldderrick1595
    @donaldderrick1595 5 years ago

    Hey Siraj, remember to check the quality of your audio; make sure it's not peaking in the red

  • @antopolskiy
    @antopolskiy 6 years ago

    Where is the link to that huge infographic about the development of NNs? I cannot find it anywhere.

    • @macshout6502
      @macshout6502 6 years ago

      medium.com/@nikasa1889/the-modern-history-of-object-recognition-infographic-aea18517c318

  • @erikadistefano7582
    @erikadistefano7582 6 years ago

    Amazing!

  • @WesleySoares
    @WesleySoares 6 years ago

    Great video! You said that one big problem of some NNs is when the image is shifted, displaced, rotated, etc. Do you think this new technique can "interpret" CAPTCHAs?

  • @hello27216
    @hello27216 6 years ago

    Hey Siraj, could you please send a link to the webpage that you were using to demonstrate in this video? I couldn't find it in the description. Thanks!

    • @PSNAcademy
      @PSNAcademy 6 years ago

      github.com/llSourcell/capsule_networks/blob/master/Capsule%20Networks%20What%20Comes%20after%20Convolutional%20Networks%3F.ipynb

  • @gpligor
    @gpligor 6 years ago +1

    Thanks for keeping us up to date, but the intro was too long. It would have been better if you had spent some extra time explaining the capsule NN

  • @Albert-fe8jx
    @Albert-fe8jx 6 years ago

    Please add a link to the paper in the video description.

  • @kingspp
    @kingspp 6 years ago

    One has to give credit to the original author of the open-source implementation. Dig deeper and you will find that this is not a scalable architecture, due to the primitive and inefficient dynamic routing algorithm; however, there is a new routing scheme, EM routing, which might improve the routing technique!
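
To make the inefficiency concrete, here is a minimal NumPy sketch of the routing-by-agreement loop from the paper (Procedure 1). `u_hat` is assumed to be the precomputed prediction vectors, and the sizes mirror the MNIST CapsNet (1152 primary capsules routed to 10 digit capsules of 16D); every iteration touches all 1152x10 capsule pairs, which is part of why it scales poorly:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # Non-linearity from the paper: short vectors shrink toward zero,
    # long vectors approach unit length; direction is preserved.
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    # u_hat: prediction vectors, shape (num_in, num_out, dim_out)
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))               # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs
        s = (c[..., None] * u_hat).sum(axis=0)    # weighted sum per output capsule
        v = squash(s)                             # output capsule vectors
        b = b + (u_hat * v[None]).sum(axis=-1)    # agreement updates the logits
    return v

rng = np.random.default_rng(0)
v = dynamic_routing(rng.standard_normal((1152, 10, 16)))
print(v.shape)   # (10, 16)
```

Each output capsule's vector length stays below 1 by construction of `squash`, so it can be read as an existence probability.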

  • @ArtyAnikey
    @ArtyAnikey 6 years ago

    I like your simple explanations. This is always good.

  • @snzn3854
    @snzn3854 6 years ago +2

    @Siraj Raval Can you make a tutorial on how one can code reinforcement learning, or DL/ML in general, from scratch, for example using numpy, matplotlib, pandas, etc.? I assume many of us here would eventually grow tired of the limitations Keras or TensorFlow have to offer. I believe it's better for people to thoroughly understand what's going on under the hood, to let people differentiate themselves and really understand the mechanics. So let me know if this consideration of mine is worth a try; hope to hear from you soon.
    Thank you,

    • @ericsteinberger4101
      @ericsteinberger4101 6 years ago +1

      he did it in numpy. I think even more than once

  • @xuhaodu3921
    @xuhaodu3921 6 years ago +1

    Hi guys, the owner of this code is still updating his work, so if you are interested, please go to this repo: github.com/naturomics/CapsNet-Tensorflow to get the latest update!

  • @thebutlah
    @thebutlah 6 years ago +1

    Please take some extra time explaining the capsule networks more in-depth, you spent only about 5 minutes on them but about 15 on regular CNNs. Thanks for the video though!

  • @dLoLe
    @dLoLe 6 years ago

    You might try getting a bit more in depth with it, considering it's a 20-minute video and what you said about capsules can basically be read in two sentences. Still a fun video, especially for non-technical people, I reckon.

  • @BRANDMAW
    @BRANDMAW 6 years ago

    Hey Siraj! I'm dying with laughter from your top meme pics. Maybe they're compiled somewhere? :3

  • @jonaslai5867
    @jonaslai5867 6 years ago

    How is the PrimaryCaps Layer different from Grouped Convolution (3 Groups, 8 filters per group and kernel of 9x9)?

  • @feic8557
    @feic8557 6 years ago

    At 6:55, about 'dropout'. Maybe you can borrow Andrew's explanation about dropout. Just a personal opinion.

  • @razorintube
    @razorintube 6 years ago

    Love your videos... diverse topics... well researched... with insight of your own... adds a different flavor to each of your videos

  • @deseofintech1449
    @deseofintech1449 6 years ago

    Great stuff Siraj!!! Can capsule networks improve performance on sentiment analysis tasks as well? What's your take on it?

  • @shawz4308
    @shawz4308 6 years ago

    gooooood!