Deep Learning Demystified

  • Published May 22, 2016
  • Part of the End-to-End Machine Learning School Course 193, How Neural Networks Work at e2eml.school/193
    An explanation of deep neural networks with no fancy math and no computer jargon. For slides, related posts and other videos, check out the blog post: brohrer.github.io/deep_learnin...
    Follow me for announcements: / _brohrer_
  • Science & Technology

Comments • 92

  • @kalyansunku
    @kalyansunku 7 years ago +2

    Thanks Brandon Rohrer for detailed explanation of concepts. Initially I saw your video on Linear Regression which is part of How Data Science Works. Later I had to search a lot to find your channel. Once again thanks for such videos and distributing your knowledge on Data Science.

  • @reogarr
    @reogarr 7 years ago

    Thanks a lot Brandon, excellent presentation! I could easily understand the technical aspects thanks to the background of neural network, which I am more than familiar with.

  • @CurtisBRO
    @CurtisBRO 7 years ago

    Thank you Brandon. Very clear with great graphics. Very well explained!

  • @loremipsum1584
    @loremipsum1584 7 years ago

    Great tutorial, thank you. I have some questions. When we started learning in your example, did you first randomly assume in both output neurons that he worked in the morning and not in the evening, and then calculate those weights and focus on the bigger one (.6)? So if I understood correctly, would the results for the second day be: first output neuron .85 and second output neuron .7? I think I am missing something here; also I am not sure where the actual data fits into all of this. You are a great teacher and I hope that you will continue to create videos like this.

  • @kaiwang2924
    @kaiwang2924 8 years ago +2

    Clear, easy to understand

  • @mbaws98
    @mbaws98 6 years ago

    Love it, the best explanation of deep learning that I have ever seen; it connects the biomedical mechanism and AI together, brilliant!

  • @OPHIR1809
    @OPHIR1809 7 years ago +1

    Amazing, way to go Brandon it was surprisingly very easy to understand, great presentation

  • @ruslanaltukhov6637
    @ruslanaltukhov6637 7 years ago +1

    the best explanation i've ever seen.

  • @NaveenArun
    @NaveenArun 8 years ago +2

    Thank you, great visuals and very clear

    • @captainjack6758
      @captainjack6758 7 years ago +10

      Great clear, very visuals, wow!

  • @mazito1000
    @mazito1000 7 years ago

    "You can substitute magic for deep learning and it fits perfectly"
    Magic Demystified

  • @The06madmax06
    @The06madmax06 7 years ago +1

    Very informative!

  • @JadeAllenCook
    @JadeAllenCook 7 years ago +1

    Great talk, you did an awesome job explaining. I'm interested in seeing future applications; is there any code available online?

    • @BrandonRohrer
      @BrandonRohrer 7 years ago

      Sorry, no code. These are just illustrative cartoon examples for explaining concepts.

  • @errolhusaberg3791
    @errolhusaberg3791 7 years ago +57

    Wonder what happens if that cooking robot gets hold of HowToBasic videos

    • @Logan-kn3gt
      @Logan-kn3gt 5 years ago +1

      that'll be epic

    • @winviki123
      @winviki123 5 years ago

      that's a great question lmao

    • @bluemamba5317
      @bluemamba5317 4 years ago

      Will probably poison the first person it comes across.

  • @TheRealMrLaserCutter
    @TheRealMrLaserCutter 7 years ago +1

    Hi Brandon, just watched both your talks on deep learning. I found them to be the clearest presentations so far on the subject. Thanks. Why do image recognition nets use such low-res photos to train with? I mean, some of the features like blood vessels or hairs on the skin would be useful for classification. Is this a hardware issue, or do higher-res pics just confuse the nets?

    • @BrandonRohrer
      @BrandonRohrer 7 years ago +3

      I appreciate it deeply CNC Scotland.
      There are a few reasons to use lower-res photos. The first is that computational requirements go up dramatically (roughly polynomially) with the number of pixel rows and columns.
      Second, if a feature, such as an eye, spans twice as many pixels, the features that identify it must also span twice as many pixels. That requires additional layers in the CNN, further increasing the computational requirements.
      Third (this one is speculation on my part): doubling the resolution means that there are many more ways that the pixel inputs can combine to represent the same eye. This may fundamentally make the learning problem more difficult, at least in the way CNNs approach it. Going to higher resolution may make it easier to identify individual hairs and blood vessels, but tougher to recognize a characteristic nose profile.
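      The scaling in the first point above can be made concrete with a quick back-of-the-envelope cost model. Everything in the sketch below (the `conv_macs` helper, the 64x64 and 256x256 sizes, the channel counts) is a hypothetical illustration, not from the talk:

```python
# Rough cost of one 3x3 convolution layer, counted in
# multiply-accumulate operations (MACs):
#   MACs ~= rows * cols * in_channels * out_channels * 3 * 3
def conv_macs(rows, cols, in_channels=3, out_channels=64, kernel=3):
    return rows * cols * in_channels * out_channels * kernel * kernel

low = conv_macs(64, 64)      # a 64x64 input image
high = conv_macs(256, 256)   # 4x the rows and 4x the columns
print(high // low)           # 16: cost grows with the square of the scale factor
```

      Quadrupling the side length of the image multiplies the per-layer cost by sixteen, before even counting the extra layers needed to cover larger features.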

  • @asdk2006
    @asdk2006 7 years ago

    is there software for image recognition in deep learning, and does an XML weight file exist?

  • @godsadog
    @godsadog 7 years ago

    “1.) No formalism of which we can know that it expresses correct (and only correct) thought, can capture our entire abstract thought. 2.) No formalism in which only objectively correct propositions can be derived, can, in its derivable formulae, capture all objectively obtaining conceptual relations.”
    Gödel, K. (2003). COLLECTED WORKS. (Vol. IV). (I. Oxford University Press, Ed.) Oxford: Oxford University Press, p. 521.

  • @Hathwos
    @Hathwos 7 years ago +6

    Everybody try this: listen to this explanation of deep neural networks and simultaneously play the Una Mattina album by Ludovico Einaudi in the background... lean back, close your eyes, and relax...

    • @BrandonRohrer
      @BrandonRohrer 7 years ago +2

      Thank you Gerd. I was looking for the perfect soundtrack. th-cam.com/video/0Bvm9yG4cvs/w-d-xo.html

    • @Hathwos
      @Hathwos 7 years ago

      You're welcome ;) It just fits so perfectly ^^

    • @CariagaXIII
      @CariagaXIII 7 years ago +1

      now you made it dramatic LOL

    • @Base2ShortY
      @Base2ShortY 7 years ago +1

      Perfect!! :D

  • @TheRyulord
    @TheRyulord 2 years ago

    Correction at 21:00
    Hierarchical Temporal Memory is not a deep learning algorithm.

  • @jcjensenllc
    @jcjensenllc 7 years ago +1

    great lecture, thumbs up. One suggestion: wear a lavalier mic so we can hear you when walking away from the podium.

  • @limitless1692
    @limitless1692 7 years ago +1

    wow that was cool,
    thanks for creating this

  • @sallerc
    @sallerc 7 years ago +15

    Nice video, thanks. I think it would be even easier to follow if you had an example with distinct input/outputs though (instead of the am/pm working hours example).

    • @BrandonRohrer
      @BrandonRohrer 7 years ago +1

      Thanks salle rc. I think showing a classification example would be a good idea. I'll put that in my Future Work queue. In the meantime, here is an example like that, but for convolutional neural networks: th-cam.com/video/FmpDIaiMIeA/w-d-xo.html

    • @jameswall3309
      @jameswall3309 7 years ago +2

      Ya, the AM/PM example is very difficult to follow

  • @juan_zapata
    @juan_zapata 7 years ago +4

    The am/pm example is kind of confusing, the rest of the presentation was great

  • @johnfakename1823
    @johnfakename1823 7 years ago

    I understood the one-level example: compare the error of the output to adjust weights. But what do you do in the multi-layer example? For example, you input an image and test whether it's a dog: the system gets it wrong at the level-4 output. How do you adjust the level-2 and level-3 weights, since we do not have a clear right/wrong for what those nodes represent?

    • @BrandonRohrer
      @BrandonRohrer 7 years ago

      Hi John. This problem is solved by using the error in the final guess to train every weight in every layer. If I adjust one weight in a lower level of the CNN and my final guess gets a little bit more accurate, then I keep that adjustment. This is the power of backpropagation. It takes the error in the final guess and propagates it back through all the previous layers.
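      Backpropagation in miniature can be sketched in a few lines of plain Python. The 1-1-1 network below (one input, one tanh hidden unit, one output) and all of its numbers are illustrative assumptions, not code from the talk; it shows the output error being carried back through both weights by the chain rule:

```python
import math

# Tiny 1-1-1 network: x -> h = tanh(w1*x) -> y_hat = w2*h.
# The error in the final guess trains BOTH layers' weights.
w1, w2 = 0.3, -0.2          # arbitrary starting weights
x, target = 1.0, 0.5        # one made-up training example
lr = 0.1                    # learning rate

for _ in range(2000):
    h = math.tanh(w1 * x)
    y_hat = w2 * h
    err = y_hat - target              # error in the final guess
    grad_w2 = err * h                 # chain rule: d(error^2/2)/d(w2)
    back = err * w2                   # error signal sent back to the hidden unit
    grad_w1 = back * (1 - h * h) * x  # ...and through tanh's slope to w1
    w2 -= lr * grad_w2
    w1 -= lr * grad_w1

print(abs(w2 * math.tanh(w1 * x) - target))  # shrinks toward zero
```

      Note that `grad_w1` reuses the same `err` measured at the output; no separate right/wrong answer is ever needed for the hidden unit itself.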

  • @MasKiller86
    @MasKiller86 7 years ago +1

    your lectures are awesome, would you mind making more lectures on deep neural networks :p

  • @VienTheFarmer
    @VienTheFarmer 7 years ago +1

    thanks, it makes sense!

  •  6 years ago

    If Estopa, a Spanish band, is similar to The Police or the Bee Gees, something is not working well in Spotify. Or Alaska y Dinarama: I love them, but they are quite different from Daft Punk.
    Thanks for the videos. You explain really well.

  • @eleakokkonen6093
    @eleakokkonen6093 6 years ago +1

    I've watched a dozen of these and I have to say that this was the best. It explained the basic concept along with some computational logic without going too deep into the math right away; this seems to be a tough combination to find. Many thanks!

  • @CurtisBRO
    @CurtisBRO 7 years ago

    In the early slides you showed an axon touching many dendrites of a single downstream neuron with each touching point having a given weight. Yet in the nice clean circle and stick diagrams, it seems as if you are implying that the axon of one neuron only touches a single dendrite of the downstream neuron. Could one say that if an axon touches several dendrites of a downstream neuron that in effect we can call this one single connection (and thus diagram it this way) but with the weight of those many touch points added together?

    • @BrandonRohrer
      @BrandonRohrer 7 years ago +1

      Yes, you are spot on. This is a good interpretation for artificial neural networks (ANNs). In actual neurons, multiple connections probably allow for more complex functionality, but that is simplified away in ANNs.
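      The equivalence described here is easy to check for the weighted sum an artificial neuron computes. The numbers below are arbitrary, made up purely for illustration:

```python
# Two synapses from the same upstream neuron, with weights w_a and w_b,
# add w_a*x + w_b*x to the downstream neuron's input sum.
# That is identical to one connection whose weight is w_a + w_b.
x = 0.7                      # upstream neuron's output
w_a, w_b = 0.4, 0.1          # weights of the two touch points
two_connections = w_a * x + w_b * x
one_connection = (w_a + w_b) * x
print(abs(two_connections - one_connection))  # zero, up to float rounding
```

      This is why the circle-and-stick diagrams can safely draw a single edge per neuron pair: for a linear sum, merging the touch points loses nothing.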

  • @SharpenSoul
    @SharpenSoul 7 years ago

    thank you very much

  • @janiswehen9077
    @janiswehen9077 6 years ago

    is that morning at 1:32?

  • @NinjaDoge
    @NinjaDoge 7 years ago

    but what are the gradient and delta @14:38?

  • @Trackman2007
    @Trackman2007 7 years ago +23

    That am/pm example is horrible. It is so hard to keep a clear picture in mind of all those am/pm, am/pm. It would be better to have something easily imaginable.

    • @BrandonRohrer
      @BrandonRohrer 7 years ago +10

      Noted Trackman2007. I'll put my thinking hat on and see if I can find something more intuitive for the next go-round.

    • @Trackman2007
      @Trackman2007 7 years ago +4

      Thank you sir!

    • @craighalpin9521
      @craighalpin9521 6 years ago

      Brandon Rohrer, how is the job of each neuron decided? After the network is trained, is the programmer able to figure out what each neuron is doing? How is it decided how many neurons are on each level and how many layers it has?

    • @davidguaita
      @davidguaita 5 years ago

      haha Sorry Brandon Rohrer, but I agree. It's very messy with all those am/pm. The rest of the presentation is very clear, thanks.

  • @MehranZiadloo
    @MehranZiadloo 7 years ago

    I watched this hoping I'd learn something new about Deep Learning, but all he talked about was the Multi-Layer Perceptron. I haven't had the chance to study Deep Learning yet, but I'm pretty sure it's not just a new name for MLP!

  • @ecotech2624
    @ecotech2624 2 years ago

    The audio is broken.

  • @mythoughts9724
    @mythoughts9724 7 years ago +10

    Audio volume is rising and falling.

  • @peregudovoleg
    @peregudovoleg 5 years ago

    It is interesting, but the sound is off. The speaker should have worn the mic.

  • @AkinJanet
    @AkinJanet 6 years ago

    GREAT

  • @shailusingh
    @shailusingh 7 years ago

    Great

  • @CaptainLoony
    @CaptainLoony 6 years ago +1

    how is nobody bothered that he's showing the slideshow the wrong way?

  • @Brainbuster
    @Brainbuster 7 years ago +52

    Play at 1.5x playback speed. =)

    • @oonmm
      @oonmm 7 years ago +3

      Nice, now I can watch more videos about the subject in a shorter time!

    • @georgeyiu4747
      @georgeyiu4747 7 years ago

      Niskinatorn

    • @oonmm
      @oonmm 7 years ago +1

      That's a-me!

    • @richardsatoru
      @richardsatoru 7 years ago

      Ha! Just scrolled down to share the same tip!

    • @gauthamgajith9684
      @gauthamgajith9684 7 years ago

      i did the same :)

  • @gmshadowtraders
    @gmshadowtraders 8 years ago +1

    From an owl to a fighter jet....you gotta love the hype :)

  • @DrWeldonTeixeira
    @DrWeldonTeixeira 7 years ago

    Santos Dumont's Air Plane is the true Air Plane. lol

  • @uncommon_common_man
    @uncommon_common_man 4 years ago

    Audio is low

  • @EddieKMusic
    @EddieKMusic 7 years ago +2

    The am, pm example made no sense

    • @Pablosaurus
      @Pablosaurus 7 years ago

      I can kind of see what he's getting at with that one but it seems kind of a backward example.

  • @DrWeldonTeixeira
    @DrWeldonTeixeira 7 years ago

    Santos Dumont Air Plane. :D

  • @carlosperez66
    @carlosperez66 7 years ago

    Why is HTM even in the list of Deep Learning technologies??!

    • @chordogg
      @chordogg 7 years ago +1

      Calm down.

    • @oonmm
      @oonmm 7 years ago

      Yeah what's up with that!?!??

  • @moahaimen
    @moahaimen 7 years ago

    Hi Mr. Brandon Rohrer, I need to use deep learning for character recognition. Could you help me please? Thank you.

  • @kjnoah
    @kjnoah 7 years ago

    Are not numbers just names we give to quantities or values? And are not names just variables with multiple possible values based on context? Maybe it would be better to talk about value vs. variable instead of name versus number.

    • @kjnoah
      @kjnoah 7 years ago

      I could say value vs context, but then that is the definition of a variable.

  • @sexypoulet6273
    @sexypoulet6273 7 years ago +2

    Skynet

  • @paulwillisorg
    @paulwillisorg 5 years ago

    Orch OR. Brains are quantum.

  • @chd9841
    @chd9841 6 years ago +1

    I got bored... thoroughly bored... wanted to run away... but was forcing myself every moment to complete the video. I slept in the end...

  • @oohoohayy4538
    @oohoohayy4538 5 years ago

    Lvl up Intelligence 1,000

  • @mathieuclerte5252
    @mathieuclerte5252 6 years ago +2

    The am/pm example is not great; @10:50 you kind of lost me with it. I was completely unable to understand what came after... Really, choose your examples wisely.