Deep Learning - Computerphile

  • Published Dec 20, 2024

Comments •

  • @rachael3533
    @rachael3533 6 years ago +468

    Let's all take a moment to recognize he's explaining complex math and computing in a second language

    • @rosspidoto
      @rosspidoto 5 years ago +54

      Native English speakers constantly take this for granted.

    • @noahwhipkey6262
      @noahwhipkey6262 5 years ago +12

      @@rosspidoto Most people speak English nowadays. Kinda the de facto standard of technology.

    • @rosspidoto
      @rosspidoto 5 years ago +7

      @@noahwhipkey6262 And what would your native language be then?

    • @noahwhipkey6262
      @noahwhipkey6262 5 years ago +1

      @@rosspidoto Hmmm. Take a guess ;)

    • @vikranttyagiRN
      @vikranttyagiRN 5 years ago +9

      Yup, exactly what I was thinking. Not an easy task at all

  • @iLLt0m
    @iLLt0m 8 years ago +187

    Machine learning courses always with the damn house prices lol.

    • @Jorge-ub3we
      @Jorge-ub3we 6 years ago +3

      ikr

    • @WhoForgot2Flush
      @WhoForgot2Flush 6 years ago +4

      right?

    • @MubashirullahD
      @MubashirullahD 5 years ago

      hahaha

    • @hattrickster33
      @hattrickster33 4 years ago

      @@MubashirullahD you know?

    • @MubashirullahD
      @MubashirullahD 4 years ago +2

      @@hattrickster33 It is in week 1 of Coursera's Machine Learning course taught by Andrew Ng. Best course out there. It is natural others would use the same example.

  • @tylerwaite8444
    @tylerwaite8444 8 years ago +19

    This makes much more sense than "a neural network is just a combination of inputs and outputs with a hidden layer in between" that you get from a lot of other videos. Awesome!!!

    • @Triantalex
      @Triantalex 21 days ago

      false.

  • @Sneezy103
    @Sneezy103 8 years ago +235

    Please go deeper into this subject, it's really interesting!

    • @112BALAGE112
      @112BALAGE112 8 years ago +28

      +David T no pun intended

    • @judgeomega
      @judgeomega 8 years ago +2

      +David T The most interesting subject to ever exist in the entire universe: intelligence.

    • @INLF
      @INLF 8 years ago +6

      If you're really interested in the subject, go watch Andrew Ng's machine learning course at Coursera, and for an even deeper understanding, Geoffrey Hinton's course "Neural Networks for Machine Learning".
      The AI explanations on this channel are really bad...

    • @sanjacobo84
      @sanjacobo84 8 years ago +3

      +INLF Great video! I started the Coursera machine learning course four weeks ago. Really cool.

    • @Vulcapyro
      @Vulcapyro 8 years ago +4

      +David T +INLF I highly recommend the CS231n course at Stanford for convnets if you have some background in machine learning. They have been making incredible efforts towards disseminating the material and have _excellent_ notes as well as lecture recordings.

  • @boenrobot
    @boenrobot 8 years ago +203

    Damn... I didn't notice the whole "right" thing until I read the comments (halfway through the video)... and suddenly, I can't unhear it (during the other half).

    • @djdedan
      @djdedan 8 years ago +1

      +boenrobot i know! Now I can't un-hear it, either! ARGH!

    • @Che8t
      @Che8t 8 years ago +3

      +DjDedan
      right?

    • @ozdergekko
      @ozdergekko 8 years ago +1

      +boenrobot -- it literally made me stop watching. Am I left? (yes, I am. Very)

    • @flecksstudio725
      @flecksstudio725 7 years ago +1

      you *could say* they *more or less* made you pay attention to it, *right*?

    • @sisakhoza4739
      @sisakhoza4739 7 years ago +1

      Just fell into the same trap

  • @LILLJE
    @LILLJE 8 years ago +358

    The camera guy is called Wright, right?

    • @Gehr96
      @Gehr96 8 years ago +23

      +LILLJE I think it's rightly written Rite, right?

    • @fightocondria
      @fightocondria 8 years ago +1

      +Gehr96 ->

    • @Squidward1314
      @Squidward1314 8 years ago +24

      Wow! At first I didn't notice it at all. But he says it after EVERY sentence!

    • @blockchaaain
      @blockchaaain 8 years ago +20

      Hahaha, I love that he changes it to "yeah" when he has to end a sentence on "wrong".

    • @ToyotomiHideyoshiGre
      @ToyotomiHideyoshiGre 8 years ago +1

      Wright?
      Phoenix Wright!!

  • @54m0h7
    @54m0h7 8 years ago +2

    This reminds me of an Excel sheet I made for work that estimates the cost of building a custom air handler. I formatted it to print on 8.5x11, and it was 20 pages when it was done. Each page was inputs/calcs for a different part, like Supply Fans, Cooling Coils, Humidifiers, etc. So by saying 'yes', you want a humidifier, it would not just add the cost of the humidifier and install labour, but also add some length to the unit, which meant more framing cost, and added to the weight of the unit in both areas.

  • @NikiHerl
    @NikiHerl 8 years ago +115

    His English takes a little getting used to but I actually feel like I understand neural networks now :D

    • @certee
      @certee 8 years ago +23

      I think we've all become real estate agents more

    • @BurnabyAlex
      @BurnabyAlex 8 years ago +10

      +Niki Herl varry yabbles.

    • @recklessroges
      @recklessroges 8 years ago

      +Niki Herl I agree. How is this applied to code? Is it better to use a functional language or an imperative one? So many questions, and can machine learning teach us the answers to those?

    • @Toschez
      @Toschez 8 years ago +2

      +Niki Herl Right?

    • @nicolasroux6999
      @nicolasroux6999 8 years ago +2

      +Reckless Roges Better yet: use a multi-paradigm language. It ought to be fast, too. Which is why most machine learning libraries are written in C++ for efficiency, sometimes with Python bindings for ease of use.

  • @panzach
    @panzach 8 years ago +4

    Thank you, I think it's the first time I see someone trying to explain multilayer perceptrons intuitively.

  • @TheAnimystro
    @TheAnimystro 8 years ago +25

    Your video about AlphaGo intrigued me so much that I have been researching machine learning since, and compared to the other videos and technical explanations I have seen and read, this video is by far the easiest to understand and is the best at actually explaining how it works. It's just a shame that I had already invested at least 5 hours of my time to understand what you have said in only 11 minutes (though I did get to see some neuroevolution networks fight each other gladiator style, so I guess there are positives and negatives hehe)

    • @omarshoura6327
      @omarshoura6327 2 years ago +2

      Where are you now in your research, out of curiosity?

    • @Triantalex
      @Triantalex 21 days ago

      ok?

  • @Phagocytosis
    @Phagocytosis 8 years ago +9

    This video actually gives a very good and clear explanation of the concepts. Excellent job!

  • @andik70
    @andik70 8 years ago +23

    x_1 = location, x_2 = location, x_3 = location

    • @inin-id6bw
      @inin-id6bw 6 years ago +1

      better than all the "right" comments combined

  • @NdamuleloNemakh
    @NdamuleloNemakh 6 years ago +2

    This is the best overview of what deep learning is

  • @bakmanthetitan
    @bakmanthetitan 8 years ago +95

    Excellent video, I love this topic! Definitely do more on machine learning.

  • @Carrosive
    @Carrosive 8 years ago +7

    Complicated, right?

  • @dries2965
    @dries2965 8 years ago +2

    I like the intuitive explanation that a neural network can approximate all functions to an arbitrary precision. The more layers (the deeper the network), the better the approximation. Couple that with the backpropagation algorithm, and you see that if there is a function between the input and output, then given enough training data, you can approximate it.

  • @woksin
    @woksin 8 years ago +165

    Right

    • @Adraria8
      @Adraria8 8 years ago +2

      Left

    • @blahblah9941
      @blahblah9941 8 years ago +1

      or middle....

    • @woksin
      @woksin 8 years ago

      true reality Right

    • @LaughterOnWater
      @LaughterOnWater 8 years ago

      Is this better or worse than "You know" or "Like" as a punctuative crutch?

    • @l3p3
      @l3p3 8 years ago

      Right?

  • @g.livne98
    @g.livne98 8 years ago +6

    1) I so regret reading all the comments on the "right" thing, because it was very hard to listen after doing so.
    2) Artificial neural networks mainly work by learning from examples: using a database of examples of how the thing you want to achieve is done correctly, you give your network a "grade" for how close it comes to the correct value on the examples you train it on, so that it learns the problem and can tell you the value for examples you don't know the answer to.
    I was just pretty sure he wasn't clear enough about it, but maybe I missed something because of the "right" thing...

  • @tobiasztopczewski8089
    @tobiasztopczewski8089 8 years ago +56

    at the start: "Maybe it gets slightly mafia".
    (mathy)

    • @Triantalex
      @Triantalex 21 days ago

      false.

  • @gohorseriding
    @gohorseriding 8 years ago +4

    I'm left feeling kind of confused. How does the deep learning algorithm correct for overfitting? Is the algorithm trying to make the most parsimonious configuration? And do parameter configurations get penalised according to their complexity? Or am I missing the point?

    • @compuholic82
      @compuholic82 8 years ago +1

      +Tom Swinfield Good question. The guy in the video left out an important part. While he is right that there is no reason to stop at the second layer, there is a reason why deep learning has only just caught on (aside from the computational resources needed). If you start to make a fully connected network deeper you will (a) very likely overfit and (b) have trouble finding a good initialization for your weights.
      The new generation of deep networks are convolutional networks which restrict the connectivity between neurons in the first layers. Each neuron only has a small receptive field towards the neurons of the previous layer. This helps to regularize the model.
      This probably wouldn't work very well with the example he provided since there is no reason why neighboring input values should be correlated with each other (this is the problem when using hand-crafted features). But those convolutional structures work great with real-world data like images or sound. If you look at a pixel in an image you can usually predict the value of the neighboring pixel pretty well.
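
The restricted connectivity described above (a small, shared receptive field sliding over the input) can be sketched in a few lines of pure Python. This is an illustrative toy, not any particular library's API; the function name `conv1d` is made up for the example:

```python
# A 1D "convolutional" unit as restricted connectivity: each output neuron
# sees only a small window (receptive field) of the previous layer, and all
# windows share the same weights.

def conv1d(inputs, weights, bias=0.0):
    """Slide a shared weight window over the input; one output per position."""
    k = len(weights)
    return [
        sum(w * x for w, x in zip(weights, inputs[i:i + k])) + bias
        for i in range(len(inputs) - k + 1)
    ]

# Edge-detector-style kernel: responds where neighboring values differ.
signal = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
print(conv1d(signal, [-1.0, 1.0]))  # [0.0, 0.0, 4.0, 0.0, 0.0]
```

Because neighboring pixels or samples are correlated, a shared local window like this can pick up meaningful structure with far fewer weights than a fully connected layer.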

  • @nycsaba
    @nycsaba 8 years ago +1

    Nice channel, but that pen is driving me crazy. Could you please filter out that scratchy noise of the pen from the audio in future videos pls :(

  • @calinmcosma
    @calinmcosma 8 years ago +15

    Nice video. I want to hear more from this guy.
    Also, as a C.S. student, I would want some materials from these professors. I watched a lot of videos from Computerphile and I would really like some references to interesting/favorite books that the professors recommend. I love the videos, but sometimes I like to go deeper into the subject. And yes, I do have internet, and I do know how to search for things, but I also like reading books; I like collecting C.S. books, so I don't mind stacking some more in my collection.
    Thank you, and keep up the good work.

    • @Triantalex
      @Triantalex 21 days ago

      ok?

  • @Dima-rj7bv
    @Dima-rj7bv 2 years ago +1

    Such a beautiful, step-by-step explanation of a complicated concept. I love it!

  • @finlaymcewan
    @finlaymcewan 8 years ago +21

    Google's system that can identify what an image contains uses a neural network that is over 30 layers deep!

    • @Shortninja66
      @Shortninja66 8 years ago +6

      Yeah that's cool but can you link a source

    • @finlaymcewan
      @finlaymcewan 8 years ago

      +Vulcapyro that's incredible

    • @michaelpound9891
      @michaelpound9891 8 years ago +1

      +Shortninja66 It's called GoogLeNet; if you search for it, the PDF is easily found. GoogLeNet, VGG and others are some of the top contenders at the moment for large-scale classification. Microsoft's deep residual learning is another effort that looks to be the best yet; they even tried 1000 layers!

    • @Shortninja66
      @Shortninja66 8 years ago

      Michael Pound Yeah what Microsoft has tried is pretty cool

    • @med5032
      @med5032 5 years ago

      what is up with YouTube comments and never sourcing anything?

  • @AwesomeCrackDealer
    @AwesomeCrackDealer 8 years ago +15

    But how do you actually do it, like the programming?

    • @alextotheroh8624
      @alextotheroh8624 8 years ago +2

      +Fuvity Weka is a tool worth looking into. Python has machine learning libraries and so does R. You can also do some interesting things with Amazon Machine Learning, but I can't remember if it's accessible under the free tier. When I tried it after Amazon Machine Learning first came out, it was free. Python and R will require lots of research and understanding while Weka and Amazon Machine Learning are more user friendly.

    • @SuviTuuliAllan
      @SuviTuuliAllan 8 years ago +1

      +Fuvity I'm in the process of getting drunk so that I can work on a JavaScript library for doing deep learning on the GPU. Do you want me to write a tutorial as well? I'll add some nice ponies in it for you!

    • @AwesomeCrackDealer
      @AwesomeCrackDealer 8 years ago

      Thanks, everyone! I'll check everything out, but I'm just a beginner programmer. I only know C and data structures, and am learning Java and OOP

    • @bool.
      @bool. 8 years ago +4

      +Fuvity Start with your input values. For every combination of input values, create a node in the first "layer" of your program. Each node has some function which takes its inputs, modifies them based on some internal weights, and then combines them somehow, returning the result. Repeat this, using the outputs of each layer as the inputs for the next layer, for as many layers as you want your algorithm to have (being careful not to use too many and "over-fit" your algorithm).
      When it comes to training your algorithm, all you need to change is the weights in each of the nodes. Figuring out HOW to find the best weights is the hard part, and I don't think I could describe the ways to do it here.
      Ideally, you'll end up with combinations of inputs with no meaning being given very low weights when their values are used, and those which do mean things being weighted more strongly.

    • @beegieb4210
      @beegieb4210 8 years ago +5

      +Fuvity TensorFlow, Keras, and Lasagne are excellent starting tools.
      I recently built a recurrent neural network chat bot with ~1,000,000 parameters using a GPU on Amazon Web Services using TensorFlow. It's actually pretty straightforward since it takes care of differentiation for you.
      That's the hardest bit in building neural networks: computing gradients efficiently and accurately. Thankfully TensorFlow takes care of that for you :)
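
The layered forward pass described in this thread can be sketched in plain Python. All weights below are made-up toy values; finding good ones is the training problem the thread points to:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """One fully connected layer: each node combines all inputs with its own weights."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

def forward(inputs, layers):
    """Feed each layer's outputs in as the next layer's inputs."""
    for weight_rows, biases in layers:
        inputs = layer(inputs, weight_rows, biases)
    return inputs

# Toy 2-input -> 2-hidden -> 1-output network with arbitrary weights.
net = [
    ([[0.5, -0.6], [0.8, 0.2]], [0.0, -0.1]),  # hidden layer
    ([[1.0, -1.0]], [0.0]),                    # output layer
]
print(forward([1.0, 2.0], net))
```

Training would then adjust every weight and bias to push the output toward known correct answers, which is exactly the part libraries like TensorFlow automate via automatic differentiation.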

  • @this_is_mac
    @this_is_mac 1 year ago

    Wow, I didn't understand the reasoning behind the hidden layers till now!!!

  • @ibosnfs1997
    @ibosnfs1997 8 years ago +226

    RightMan

    • @iviasterzox22
      @iviasterzox22 8 years ago +3

      +ibrahim özcan Right? You tell me!

    • @ibosnfs1997
      @ibosnfs1997 8 years ago +9

      IVIaster Zox Right! Right?

    • @petebenedetti435
      @petebenedetti435 8 years ago

      +ondsinet _ (nik man) right

    • @fransezomer
      @fransezomer 8 years ago

      +ibrahim özcan dude....lolololololoolooololoollllll....

    • @andrew1257
      @andrew1257 6 years ago

      DAMN YOU NOW I CAN'T UNHEAR IT

  • @mrbatweed
    @mrbatweed 8 years ago +21

    Probably a bit more planning needed for a video attempting to teach at the intended level... A bit of deep teaching needed to go with the deep learning.

    • @karlkastor
      @karlkastor 8 years ago +3

      +Mr Batweed yeah, it seemed like he was making it up on the go. "Oh, I've not talked about overfitting yet, let's throw this in."

  • @LarlemMagic
    @LarlemMagic 8 years ago +3

    I had been looking for someone to explain how to build a deep learning algorithm, thank you so much I *understand* now!

  • @jrobinson2095
    @jrobinson2095 4 years ago +1

    Nice explanation, it's cool to think about how the combination of different data can get so complicated

  • @TonyHammitt
    @TonyHammitt 8 years ago +3

    Really a great explanation with a wonderfully relatable example. Very nicely done!

  • @smileyball
    @smileyball 8 years ago

    How about a discussion about generative adversarial networks? They're fairly intuitive to explain and Computerphile viewers will probably enjoy it.

  • @shipper611
    @shipper611 8 years ago +2

    Super interesting. I'm wondering: what is the difference between deep learning and structural equation modeling? Mathematically it seems really similar, doesn't it?

    • @Vulcapyro
      @Vulcapyro 8 years ago

      +shipper611 SEM is so different that I'm not even quite sure how to start explaining how they're different lol

    • @shipper611
      @shipper611 8 years ago

      okay, but isn't it also basically a regression with intermediate values?

    • @Vulcapyro
      @Vulcapyro 8 years ago +1

      shipper611
      You can use neural networks to perform regression, but using them for something like regression could be pretty overkill due to the difference in complexity of the problem versus the representational power of nets. You can definitely find elements of regression techniques scattered about though, particularly considering a single neuron often looks like it performs some high-dimensional linear regression with a non-linear function applied afterwards.

  • @sau002
    @sau002 7 years ago

    I think you are on the right track. Please continue this series with real life examples.

  • @fxtech-art8242
    @fxtech-art8242 1 year ago +1

    best explanation I have seen so far!

  • @xenathcytrin202
    @xenathcytrin202 6 years ago

    Now what would happen if you took an output and had it affect something further down the tree?
    I wonder if that kind of feedback loop could be useful in some way.

  • @casperTheBird
    @casperTheBird 5 years ago

    So are the groupings of variables decided by people or by the AI machine?

  • @mattmatty1234
    @mattmatty1234 8 years ago

    What is an example of a bias term from his house explanation?

  • @karlkastor
    @karlkastor 8 years ago

    Good video! One slight criticism on the graphics: typically neural networks are fully connected, meaning every neuron is connected to every neuron on the layer before it.

    • @compuholic82
      @compuholic82 8 years ago

      +Karl Kastor Traditionally, yes. But the generation of deep neural networks that have been making news lately are different. They are mainly convolutional neural networks which only have a small receptive field with regard to the previous layer. Also, neighboring neurons share their connection weights, which mathematically represents a convolution.
      The last layer(s) however - which typically are the actual classifier - are fully connected again.

    • @karlkastor
      @karlkastor 8 years ago

      compuholic82
      I know how ConvNets work, but I think a vanilla/fully connected network would be a better first example.

  • @ethanpet113
    @ethanpet113 8 years ago

    Just how much tractor feed do you guys have over at Nottingham?

  • @Nono-de3zi
    @Nono-de3zi 1 year ago

    Something that is not very clear in this video (and many similar ones) is that weights can be *negative*. So we can encode (A AND B) OR ((A AND C) AND *NOT* B). That would be really relevant for house values.
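
That point can be made concrete with classic threshold units, where a negative weight acts as a veto. This is a hypothetical toy wiring (the unit and gate names are made up for the example), not the network from the video:

```python
def unit(inputs, weights, threshold):
    """McCulloch-Pitts-style neuron: fires (1) if the weighted sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def a_and_b(a, b):       # both inputs needed: positive weights, high threshold
    return unit([a, b], [1, 1], 2)

def a_and_not_b(a, b):   # the negative weight on b vetoes the output
    return unit([a, b], [1, -1], 1)

# (A AND B) OR ((A AND C) AND NOT B), wired as two layers of units
def formula(a, b, c):
    h1 = a_and_b(a, b)
    h2 = a_and_not_b(a_and_b(a, c), b)
    return unit([h1, h2], [1, 1], 1)   # OR: either hidden unit suffices
```

Without negative weights, the AND-NOT gate (and hence the whole formula) could not be expressed at all.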

  • @henrypostulart
    @henrypostulart 8 years ago +6

    His verbal, right? Tics, right? Are massively, right? Annoying, right?

  • @billschannel1116
    @billschannel1116 8 years ago

    I was interested in explaining this to a friend but I didn't know where you can buy dot matrix sheet fed paper. Does anyone know if COBOL programmer's paper will do?

  • @mattlm64
    @mattlm64 8 years ago +81

    Left.

    • @joelcoll4034
      @joelcoll4034 7 years ago

      Matthew Mitchell right

  • @jacobdawson2109
    @jacobdawson2109 8 years ago

    The way it's described, it sounds like each node is a linear combination of its inputs. In which case, the output node is just some polynomial of the input nodes. To me, it seems like some non-linear combination of nodes would be needed to make the network truly useful, or am I missing something?

  • @smzaccaria
    @smzaccaria 8 years ago

    When you say "active" in a hierarchy, do you mean marking something as "is" or "is not" by using binary variables, 1's and 0's?

  • @TechyBen
    @TechyBen 8 years ago +7

    Ah, so it's extra granularity in the detail it can find.

    • @nicolasroux6999
      @nicolasroux6999 8 years ago +7

      +TechyBen The amazing part, as the guy pointed out, is that you (as the coder/user) don't even have to understand, nor know the existence of, the underlying concepts. The neural net just deduces them if it can, given sufficient learning material and a proper layout.

  • @dhanshreea
    @dhanshreea 8 years ago +2

    Thank you for this video, I finally understand what all the technical jargon about dependencies between intermediate layers means. Thank you!

    • @KatsarovDesign
      @KatsarovDesign 8 years ago

      Can you explain it please? I also have trouble understanding this...

    • @dhanshreea
      @dhanshreea 8 years ago

      What can you not understand?

    • @KatsarovDesign
      @KatsarovDesign 8 years ago

      Dhanshree Arora The dependencies between the layers

  • @Yorgarazgreece
    @Yorgarazgreece 8 years ago +5

    You almost pulled two 360s with all those rights :P

  • @MrXperx
    @MrXperx 8 years ago

    We can do a principal component analysis and reduce the extra variables right?

    • @tinyman392
      @tinyman392 8 years ago

      Depends on the algorithm. Some neural nets actually do feature selection implicitly. For example, a convolutional neural network will select its own features to feed into a standard, fully-connected neural network. A decision tree also does its own feature selection by using a greedy algorithm for learning. Depends on the algorithm class chosen and inputs. Obviously you can still do your own before sending it in to give the learning algorithm a "hint" (I use that term a little loosely).

  • @jamesqiu6715
    @jamesqiu6715 7 years ago +1

    The result is right there on your left, sir!

  • @danielgrace7887
    @danielgrace7887 8 years ago

    If only a linear amount of a variable is fed into the network, then how can the resulting value go down and then back up? Wouldn't that require a quadratic function?

  • @javierguerrero9486
    @javierguerrero9486 8 years ago

    I feel like the video started off late, like they talked about this topic more in depth but didn't begin recording until deep in the conversation.

  • @i.p.knightly149
    @i.p.knightly149 6 years ago +3

    Homework: develop an algorithm to calculate how many times he says 'right' in one video.

  • @user-ny7sg9mz1v
    @user-ny7sg9mz1v 4 years ago

    wow, i never thought of it that way. What happens in hidden layers is quite similar to factor analysis

  • @dh4817
    @dh4817 8 years ago +54

    at minute 4:06 i counted 25 rights, right?

    • @zoranhacker
      @zoranhacker 8 years ago +9

      right

    • @Tortoise7597
      @Tortoise7597 8 years ago +1

      +Diego A Hernàndez right

    • @DrSpooglemon
      @DrSpooglemon 7 years ago

      I did notice a couple of yeahs...

    • @joelcoll4034
      @joelcoll4034 7 years ago

      Diego A Hernàndez right

    • @stewiegriffin6503
      @stewiegriffin6503 7 years ago +14

      He said it only 84 times. you are welcome:
      0:08 0:31 0:51 1:16 1:21 1:28 1:33 1:40 1:44 1:51 1:59 2:02 2:04 2:17 2:21 2:32 2:38 2:40 2:43 3:02 3:08 3:10 3:14 3:18 3:25 3:32 3:35 3:43 3:46 3:48 3:50 3:53 4:03 4:10 4:14 4:26 4:33 4:50 4:52 5:02 5:06 5:13 5:17 5:46 5:55 6:00 6:04 6:07 6:16 6:34 6:39 6:43 6:47 6:57 7:04 7:08 7:30 7:34 7:35 7:46 7:48 7:52 7:55 8:06 8:15 8:27 8:38 8:43 8:56 9:03 9:13 9:18 9:20 9:26 9:31 9:51 9:54 10:08 10:11 10:15 10:23 10:27 10:29 10:35 10:43

  • @Frexican54
    @Frexican54 8 years ago +1

    so it's a really complicated multiple linear regression, basically?

    • @offchan
      @offchan 8 years ago +2

      No, if a network contains only weighted sums of inputs like linear regression does, then it will just be a big linear function no matter how deep your network is.
      We need a non-linear component inside the network, and that is called the activation function.

    • @knowlen
      @knowlen 8 years ago

      close, it's more like multinomial logistic regression, but with nonlinear activation on hidden layers. It's still basically just regression though, and if you can understand linear regression you won't have trouble implementing this.

    • @Frexican54
      @Frexican54 8 years ago

      Nick Knowles yeah I took a class on linear and time series regression.

    • @knowlen
      @knowlen 8 years ago +1

      Yeah, time series regression is like another level above this imo. If you want to do everything with neural nets this is kind of the breakdown: use LSTM networks for time series, convolutional networks for image recognition, deep ANNs for classification/prediction. Then there's Deep Q Networks (super popular in the research community at the moment), which are used for general artificial intelligence. The progression of machine learning curriculum right now (at least at my university) is usually a path from linear reg->logistic reg->ANN, with maybe some support vector machine (beats ANN when data is less abundant but high dimensional) and clustering stuff thrown in for diversity.
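
The point made in this thread about needing a non-linearity can be checked directly: without an activation function, two stacked linear layers compose into one linear layer, so depth buys nothing. A tiny pure-Python sketch:

```python
# Two "layers" of pure weighted sums collapse into a single matrix,
# which is why a non-linear activation between layers is essential.

def matvec(m, v):
    """Multiply matrix m (list of rows) by vector v."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def matmul(m1, m2):
    """Compose two weight matrices into one."""
    return [[sum(m1[i][k] * m2[k][j] for k in range(len(m2)))
             for j in range(len(m2[0]))] for i in range(len(m1))]

W1 = [[1.0, 2.0], [0.0, 1.0]]   # layer 1 weights (toy values)
W2 = [[3.0, 0.0], [1.0, 1.0]]   # layer 2 weights (toy values)

x = [2.0, 5.0]
deep = matvec(W2, matvec(W1, x))   # the "two-layer" linear network
flat = matvec(matmul(W2, W1), x)   # the single equivalent layer
print(deep, flat)  # both [36.0, 17.0]
```

Inserting any non-linear function (sigmoid, ReLU, tanh) between the two `matvec` calls breaks this equivalence and gives the extra layers real expressive power.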

  • @lewisheriz
    @lewisheriz 2 years ago

    3:23 He says 'capture fake correlations', not 'cut through a thick of relations'. Very important point, so I thought it worth clarifying. Clearly captioning AI needs more training data for Spanish accents in English ; )

  • @js_models
    @js_models 4 years ago

    Overfitting might give you an answer that is wrong, right?

  • @ByteBitTV
    @ByteBitTV 8 years ago +3

    RIGHT?

  • @selbstwaerts
    @selbstwaerts 7 years ago +1

    I rightfully think he's right, right?
    The video even ENDS with "right" :D
    Besides this observation: splendid explanation. Thanks.

  • @l3p3
    @l3p3 8 years ago +3

    Why does he end so many sentences with "right"? Is he asking, or stating, that the sentence is right?

    • @Froggeh92
      @Froggeh92 8 years ago +3

      It's a punctuating crutch that a lot of professors use to make sure the listener is following what they said.

  • @TheSidder1
    @TheSidder1 8 years ago +8

    IMAGINE him as your GPS voice:
    "In 100 meters turn left, right."
    "Now turn left, right"

  • @shaecloud4403
    @shaecloud4403 8 years ago

    NLP right?

  • @willwong4804
    @willwong4804 7 years ago

    This is essentially structural equation modelling (SEM). CFA to be specific. With some level of automation and a fancier name.

  • @oooBASTIooo
    @oooBASTIooo 8 years ago +6

    I think he's right.

  • @endod8708
    @endod8708 7 years ago

    after reading the comments i can not unhear it. i am not the only one, right?

  • @wnh79
    @wnh79 6 years ago

    This is a very simple and clear explanation of how NNs work.

  • @unverifiedbiotic
    @unverifiedbiotic 8 years ago +4

    Guys, use noise reduction for the audio, it's not that hard.

    • @deltagamma1442
      @deltagamma1442 8 years ago +1

      +Michał Pawlak I'm no cyborg.

    • @aajjeee
      @aajjeee 8 years ago

      That's way too complicated for a machine to do

    • @unverifiedbiotic
      @unverifiedbiotic 8 years ago +1

      Don't understand those comments, is it some kind of recurring joke on this channel? You only need to import the audio recorded with the video into Audacity (free software), find a moment in the track where the guy doesn't talk and select at least a second or two of the "silence". You select the noise reduction option from a dropdown list, adjust the parameters and go back to the track. Select a bit of the track where the guy talks and go back to noise reduction - you can then click preview in order to determine if the default parameters for noise reduction work well for the audio. If you're not satisfied, you tweak the three settings in order to get rid of the static noise and artifacts. You then go back again, select the entire audio track, go back to noise reduction, set the desired parameters again (they reset after you exit) and apply them. Export the audio and replace the original audio with the new track in your video editor. Problem solved, no static from a shitty integrated mic, no droning sound from the air conditioning, no noise from cars passing by outside, and the speaker doesn't have to sound "robotic" or "tinny" after you do this, you just need to put a little effort into it. I don't understand why people don't do this with every video, while they take their time to make animations and whatnot.

    • @aajjeee
      @aajjeee 8 years ago

      Michał Pawlak it's ironic because your comment makes it seem hard when it isn't; it's not a recurring joke but it is a joke

    • @unverifiedbiotic
      @unverifiedbiotic 8 years ago

      Ok, if you've got a better way to introduce someone to the concept then go right ahead.

  • @orisicum
    @orisicum 8 years ago +20

    left?

  • @Diggnuts
    @Diggnuts 8 years ago +11

    Regarding the subject and how I thought it was explained in this video, I tend to believe this production was a bit of a Parker square....

    • @Diggnuts
      @Diggnuts 8 years ago

      Google User
      Nah, I'll just remove your petulant tripe and block you.

    • @therflash
      @therflash 8 years ago +1

      +Diggnuts stop making the Parker square a thing!!!

    • @Diggnuts
      @Diggnuts 8 years ago

      therflash Perhaps I will, but that will just make my attempt at it ... a Parker square..

    • @Triantalex
      @Triantalex 21 days ago

      ok?

  • @MushookieMan
    @MushookieMan 4 years ago

    If the relationship between these variables is linear, the structure could be flattened into only two layers, i.e. with algebraic substitution. I am left confused about what mathematical relationship exists between nodes in a real neural network.

  • @vkulanthaivel
    @vkulanthaivel 8 years ago

    It says '125 years of age' but the oldest person was 122! (In the pideo) - Pideo means Picture video.

  • @LeandroR99
    @LeandroR99 7 years ago

    Hadn't noticed the repeated "right" until I read the comments. The content and explanation are more interesting than accent and language vices.

  • @MichaelKondrashin
    @MichaelKondrashin 8 years ago

    So how does deep learning differ from neural networks?! From the description it is the same thing.

  • @MultiPaulinator
    @MultiPaulinator 8 years ago

    So, the tl;dw of it is that deep learning is basically regular machine learning put through a recursion loop.

  • @ddbrosnahan
    @ddbrosnahan 8 years ago

    How long until our genetic information is uploaded and computers determine our genetic value?

  • @rabbitpiet7182
    @rabbitpiet7182 6 years ago

    'But what the current hotness is on any particular site is kinda "i don't know" and always will be'

  • @charak100able
    @charak100able 6 years ago

    excellent explanation! finally I understand how to interpret the intermediate layers. thanks!

  • @1cy3
    @1cy3 8 years ago

    At 3:39, though it sounds like he's saying "overfeeding", he actually means "overfitting".

  • @KohrAh
    @KohrAh 8 years ago +4

    It's not one of the best videos, right?

  • @frull3
    @frull3 8 years ago +6

    Perfect drinking game: every time he says right ... SHOT!

  • @jpclk1204
    @jpclk1204 6 years ago

    A right deep learning algorithm may say that the right title for this video may be "Between right's - ComputerRight"

  • @StealthMaster123
    @StealthMaster123 8 years ago

    Didn't notice that he always says right until I read the comments. Thank you guys....

  • @jessicap2589
    @jessicap2589 8 years ago

    I feel like I use this channel to practice mindfulness.

  • @ViktorLox
    @ViktorLox 8 years ago

    Right?

  • @therflash
    @therflash 8 years ago

    The beginning of the video is misleading. Sean asks if you just sum it up to get a big number, but that's not what happens. You sum the values Zx*Ax; that is, you multiply each feature Zx by some (initially random) value Ax and sum THAT, so you end up with a predicted price for each of your examples. Then you look at how far this prediction is from reality, which gives you an error value for that example, and you average your error values.

    Then you slightly increase/decrease value A1 in a way that will decrease the average error, then do the same for A2, then A3, A4, A5, etc. Knowing the derivatives of your average error with respect to each Ax tells you which direction each Ax has to move in order to decrease the average error, so you compute those before you actually start moving the Ax's. After you move all the Ax's by a tiny bit, you make new predictions, which should now be closer to reality. You calculate new errors and a new average error; this time your average error is hopefully smaller. And again, you calculate the derivatives of the average error with respect to each Ax and move each Ax in a way that decreases the average error.

    You keep doing this over and over till your predictions are reasonably close to reality, or you fail miserably and realize that you need to square/multiply/log/scale/add/remove/.../.. some of your features, buy more computers, drink more coffee, get more data, dance during the full moon, sacrifice a goat, etc...

    • @Vulcapyro
      @Vulcapyro 8 years ago

      +therflash Yeah, he didn't really even go into the basics of how this works. He uses the example of regression because that's something he knows, but I don't think this video actually got much across besides maybe why hidden layers _might_ be a good idea. High-level videos are fine, but if you're actually trying to present the subject to learners, it's important to have something more Brailsford-like in its clarity and straightforwardness of delivery, even if the content is slowly paced.
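
therflash's loop above is ordinary gradient descent on a linear model. A minimal sketch with made-up data (the feature matrix Z, "true" weights, and learning rate are all invented for illustration):

```python
import numpy as np

# Predict price as a weighted sum of features, measure the average error,
# then nudge each weight Ax along the negative gradient.
rng = np.random.default_rng(1)
Z = rng.standard_normal((100, 3))          # 100 houses, 3 features each
true_A = np.array([2.0, -1.0, 0.5])        # invented "true" weights
y = Z @ true_A                             # the "real" prices

A = np.zeros(3)                            # initial weights
lr = 0.1                                   # step size
for _ in range(200):
    pred = Z @ A                           # sum of Zx * Ax per example
    err = pred - y                         # how far from reality
    grad = 2 * Z.T @ err / len(y)          # d(mean squared error)/dA
    A -= lr * grad                         # move each Ax to shrink the error

print(np.round(A, 3))                      # should land close to true_A
```

Swapping the squared-error derivative for backpropagated gradients through several layers is, at heart, all that deep learning adds to this loop.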

  •  8 years ago +19

    Should have instead been three videos: machine learning, neural networks, and only then deep neural networks (aka deep learning). As it stands, this video fails to explain the topic adequately.

    • @Red_Salmond
      @Red_Salmond 6 years ago +1

      You do better then, mister know-it-all.

  • @juliank7408
    @juliank7408 1 year ago

    Thanks for this intuitive explanation!

  • @sau002
    @sau002 7 years ago

    I like the dialogue-based approach.

  • @427FOX427
    @427FOX427 8 years ago

    I don't know why, but the sound of the marker irritates me so much! I love watching these videos but I have to lower the sound. Does anyone else have the same problem?

    • @zoranhacker
      @zoranhacker 8 years ago

      I think that's a pretty common thing, sometimes just thinking about the marker writing on paper gives me chills

  • @ToyotomiHideyoshiGre
    @ToyotomiHideyoshiGre 8 years ago

    I recommend "Preferred Networks".

  • @Misterlegoboy
    @Misterlegoboy 8 years ago

    I had an idea like this, only it works with logic gates

  • @RBYW1234
    @RBYW1234 5 years ago

    I got 4 tabs up, they're all deep learning, and I don't wanna exit them....

  • @tylermassey5431
    @tylermassey5431 5 years ago

    They are missing two very important data points: the buyer and the seller

  • @federook78
    @federook78 1 year ago

    Yes, it's very clear. That it is a second language, that is.

  • @FrankKitteh
    @FrankKitteh 8 years ago

    Protip: try to formulate some notion of a bridge between traditional programming and neural network training, then drink a shot of tequila (or anything above 35% ABV) before watching.
    I think I need another 30 hours to brain what I just brained while watching this.

  • @commanderlake7997
    @commanderlake7997 8 years ago

    Did anyone count how many times he said right?

  • @josnardstorm
    @josnardstorm 8 years ago

    Did anyone else start to see that final web as a 3D shape?

  • @adelbibi1030
    @adelbibi1030 8 years ago

    I can see CVPR written on the board. : - )