Season 2 Ep 22 Geoff Hinton on revolutionizing artificial intelligence... again

  • Published on Dec 21, 2024

Comments • 53

  • @weekendresearcher
    @weekendresearcher 1 year ago +6

    Prof Hinton stands for the whole podcast due to his back problem. A cruel irony of nature, played on the gifted person who gave us the practical aspects of backpropagation. 🙏

  • @michaelvonreich74
    @michaelvonreich74 2 years ago +26

    Hinton is an amazing teacher. I am taken back to 2019, when I first read the paper on distillation. Instead of any technical jargon, it starts with how insects have a larval form suitable for harvesting nutrients, and an adult form for movement and reproduction. He then drew the connection: maybe deep neural nets need to have different forms for training and inference! Instantly I was spellbound.
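
    (The idea in that paper boils down to a small loss function. A minimal sketch of temperature-based distillation in the spirit of the paper, assuming PyTorch; the function name and the T and alpha values below are illustrative choices, not the paper's code.)

        import torch
        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
            # Soften teacher and student outputs with temperature T so the
            # student can learn from the teacher's "dark knowledge".
            soft_targets = F.softmax(teacher_logits / T, dim=-1)
            soft_student = F.log_softmax(student_logits / T, dim=-1)
            # Scale the KL term by T^2 so its gradient magnitude stays
            # comparable to the hard-label term as T changes.
            kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * T * T
            # Ordinary cross-entropy against the true labels.
            ce = F.cross_entropy(student_logits, labels)
            return alpha * kd + (1 - alpha) * ce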

  • @binjianxin7830
    @binjianxin7830 2 years ago +4

    It's so brilliant watching people get tricked into believing Geoff's jokes. Thank you so much for the authentic and inspiring conversation!

  • @anonymous.youtuber
    @anonymous.youtuber 1 year ago +2

    This interview is truly beautiful, interesting, clarifying. 🙏🏻❤️

  • @josy26
    @josy26 2 years ago +14

    man I would've loved to be a student of Hinton

  • @fredzacaria
    @fredzacaria 1 year ago

    very informative, thanks.
    @23:23 how can we use spike timing algorithms for predictions?

  • @jayp6955
    @jayp6955 1 year ago +6

    Interesting talk for sure, worth the whole watch. I had the fortune of chatting with Hinton after I cold-emailed him with a theory based on my undergraduate physics-neuroscience work in 2013; I remember him being a witty guy with great intuition. It's nice to see him interested in approaches other than backprop; ML needs a radical algorithmic shift if it's going to get past the current plateau we're seeing with processing/data costs and model uncertainty. To me, these are dealbreakers and reason enough to explore everything all over again. Hinton's intuition that one-shot learning (many params, little data) is the goal of new first-principles approaches is sound; the current state of backprop is quite the opposite (few params, lots of data).
    The interviewer did a good job with the questions, focusing on spiking networks and low-power hardware -- Hinton is right that hardware will be the endgame in this industry. However, hardware design will need to be deeply influenced by algorithmic certainty. The current game is to determine the correct software for learning, then optimize it in hardware. It will be a black-swan event: as soon as someone discovers "the next backprop", hardware production will blow up within a few years. It's likely that traditional ML is headed down a rabbit hole with no carrots at the bottom -- top minds in the industry are spending valuable time studying and characterizing "laws" for systems that do not have the power to come close to AGI. New approaches are needed. It's a shame we don't have people like von Neumann alive today to show us the way, but I'm optimistic that Hinton's head is in the right direction.
    If you're interested in ML research, the best thing to be working on right now is whatever everyone else isn't talking about. In other words, understand ChatGPT -- then promptly move on. AI today reminds me of physics in the 1890s, where the research community made so much progress in classical and statistical mechanics, but quantum mechanics and relativity were on the horizon, waiting to shake up the world.

  • @samuraijosh1595
    @samuraijosh1595 2 years ago +2

    I just recently learned that this guy is a descendant of George Boole himself... mind = blown!!!!

  • @prabhavkaula9697
    @prabhavkaula9697 2 years ago +6

    This episode is awesome!!! Thank you for the interview sir.

  • @yuanli6224
    @yuanli6224 2 years ago +2

    WOW, really moved by the "faith" part: how much science was pushed forward by those lonely heroes!

  • @mosca204
    @mosca204 several months ago

    Awesome, thanks for your podcasts

  • @LongTail8443
    @LongTail8443 2 years ago

    I knew it! All the answers are here; I just have to get to know you guys or be a native English speaker, then keep studying. Love you guys so much, please keep doing this.

  • @pw7225
    @pw7225 2 years ago +5

    Love Hinton's comment on Russia and Ukraine being different countries :)

  • @ninatko
    @ninatko 2 years ago +5

    I have missed hearing Professor Hinton talk without even realizing it :D

  • @michaelvonreich74
    @michaelvonreich74 2 years ago

    Wait, is there any way to hear what was said after 10:50? :(

  • @andrewashmore8000
    @andrewashmore8000 2 years ago

    Fascinating man and interview. Thanks for sharing

  • @akash_goel
    @akash_goel 1 year ago

    Can't believe this channel has ads enabled.

  • @BritishConcept
    @BritishConcept 2 years ago +2

    Fantastic interview. I especially enjoyed the part about how Hinton ended up at Google. I'm looking forward to part 2.
    How are you going to get Alex Krizhevsky for the season 3 finale? Trap him in a large net perhaps? 😉

  • @minikyu5643
    @minikyu5643 2 years ago +2

    Thank you very much for the interview, so many deep insights.

  • @deng6291
    @deng6291 2 years ago +1

    Very insightful!

  • @bellinterlab8139
    @bellinterlab8139 1 year ago

    What is happening here is that the whole world gets Geoff as their thesis advisor.

  • @sdmarlow3926
    @sdmarlow3926 2 years ago

    That's like saying, to understand everything that is going on in a data center, you just need to understand how the transistor works.

  • @user-lk6ik3sc9l
    @user-lk6ik3sc9l 2 years ago

    Can't wait to listen to this - thanks Pieter!
    P.S.: Feel free to hop to the other side and be a guest on my show :)

  • @stuart4003
    @stuart4003 2 years ago +1

    BrainChip's Akida, a commercial neuromorphic IP implementation, uses spiking neural network technology.

  • @SiFangWu
    @SiFangWu 2 months ago

    Only when driven by curiosity can the best fundamental research be done.

  • @nootherchance7819
    @nootherchance7819 2 years ago

    Honestly, I had to google a bunch of terms to understand what our legendary Geoff Hinton was talking about 🤣. Thanks a bunch for this, and I've really enjoyed the set of guests you've interviewed lately! Keep up the good work!

  • @prabhavkaula9697
    @prabhavkaula9697 2 years ago

    I was waiting for the deep reinforcement learning and AGI question :)

  • @brandomiranda6703
    @brandomiranda6703 2 years ago

    I don't get it. Why is backprop given credit for current models if it's just a dynamic programming technique for gradient computation? It's SGD doing the true magic, IMHO.

    • @brooklyna007
      @brooklyna007 1 year ago +1

      This is an odd statement. If backprop is just another DP technique, then SGD is just another non-linear optimization technique. Every X is just another Y technique if you don't care to look at the work it took to get there, the space of other techniques that were searched, what it took to figure out this was the best model, etc. Hindsight is 20/20. This is like looking at modern particle physics and asking, "Why is Noether's theorem given so much credit if it is just another theory about symmetries? IMHO it is gauge theory that is magic and truly describes particle physics." Or maybe more down to earth: "Why is the Fourier transform given so much credit if it is just an integral? IMHO it is the Fast Fourier Transform that is magic and processes all of our signals." In case I am being too obtuse, for a pure programmer without math experience: "Why is binary search given so much credit if it is just another recursive search? IMHO red-black trees are the true magic and what almost all tree-map implementations use." Backprop can surely be related to other algorithms and mathematical structures, but that doesn't reduce its importance; that is more about the overall system it fits within.
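
      (The division of labor being debated here fits in a few lines. A minimal, illustrative NumPy sketch, assuming a made-up two-layer net with squared-error loss: backprop is the chain-rule bookkeeping that produces the gradients, and SGD is the one-line update that consumes them.)

        import numpy as np

        rng = np.random.default_rng(0)
        W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

        def step(x, y, lr=1e-2):
            global W1, W2
            # Forward pass: cache intermediates for reuse (the "dynamic
            # programming" part of reverse-mode differentiation).
            h = np.tanh(x @ W1)
            pred = h @ W2
            # Gradient of mean squared error wrt pred (up to a factor of 2).
            err = (pred - y) / len(x)
            # Backward pass (backprop): chain rule, reusing the cached h.
            gW2 = h.T @ err
            gW1 = x.T @ ((err @ W2.T) * (1 - h ** 2))
            # SGD: the update that actually moves the parameters.
            W1 -= lr * gW1
            W2 -= lr * gW2
            return float(((pred - y) ** 2).mean())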

  • @hoang._.9466
    @hoang._.9466 2 years ago

    thank u, this helped me a lot

  • @tir0__
    @tir0__ 2 years ago

    2 gods in one frame

  • @MLDawn
    @MLDawn 1 year ago

    Free Energy minimization is the key!

  • @nineteenfortyeight
    @nineteenfortyeight 1 year ago

    Damn, Hinton still looks good

  • @panFanny
    @panFanny 1 year ago

    wow amazing

  • @munagalalavanya3331
    @munagalalavanya3331 2 years ago

    You're worth more than a billion

  • @willd1mindmind639
    @willd1mindmind639 2 years ago

    I think it is unfair to take on the burden of tackling all the challenges required to build something that is not even well defined in the first place, like "Artificial Intelligence". There is no single definition of it, so how can you say you have it, are near to it, or are revolutionizing it? As it stands, "Artificial Intelligence" is simply a subset of computer software (written by humans, of course) where, instead of explicitly writing all the conditional logic to detect and classify things in advance, the software does it automatically using data in statistical models.

    But that is not necessarily how the brain does it, because computers, as systems designed to operate on fixed-length binary types, are nothing like the brain. The brain is a bio-mechanical system where cells operate on chemical signal activation and transfer, based on genetic blueprints for cellular specialization. What makes them more efficient is that bio-chemical processing is more of a mechanical operation than an algorithmic one, based on predefined bio-chemical signal activations and outputs. The chemical signals themselves are discrete and don't require a bunch of extra "work" to identify and process, since each cell is a mini machine designed to operate on specific chemical compounds.

    In a computer, by contrast, the input data is all encoded in the same fixed-length binary type system as all other data. Because these fixed-length types are based on numeric systems like base-10 numbers and have no specific meaning explicitly tied to an 'activation' in a neural network, any use of them requires complex algorithms to define, characterize, and detect such signals within a complex mathematical framework. Those algorithms in turn require large amounts of data and CPU processing to produce anything meaningful, as a measure of work and energy spent. That doesn't mean you can't do things with these algorithms, but it doesn't mean they work like humans do either, or that silicon-based processors are any closer to being like the brain, because they aren't.

    • @vrushankdesai715
      @vrushankdesai715 2 years ago

      Does it not seem to you too big of a coincidence that artificial neural networks, inspired by the human brain, happen to work wayyyy better than anything that came before? Yes, the implementation details are much different than in biological systems, but the core concept (storing & processing information in a highly distributed, interconnected graph network) is the same.

    • @willd1mindmind639
      @willd1mindmind639 2 years ago

      @@vrushankdesai715 It's not the same, because in biology each signal is encoded using specific chemical compounds that are discrete. All "processing" happens at that level, which is akin to "bottom-up" processing of very finely detailed discrete elements, so those neuron networks operate at a far lower level of detail than machine neural networks. For example, when light waves get converted in biological vision systems, each color is given its own discrete biochemical signal, resulting in imagery composed of many sub-networks of detailed collections of related colors. Those detailed networks then get passed into higher-order parts of the brain, where they get associated with patterns, features, textures, and ultimately objects at the high level. And there is no extra "work" required to get that level of detail and hierarchical embedding of relationships.

      Whereas in a computer vision system, you start with a file, which is just a bucket of binary numbers, and you then have to do work to make sense of what those numbers represent, at nowhere near the same level of detail or segmentation as biological vision. The only reason that is the case right now is that most machine learning algorithms are designed to work like Java: portable code that can run on any kind of general-purpose architecture. So there are trade-offs in doing it that way versus having very specialized architectures with custom data types for encoding light information (not as simple as R,G,B) and so forth. What I am saying is that this fundamental difference between how computers work and how nature works is not trivial.

      For example, look at how sea creatures with the ability to dynamically change skin color and texture work. That is biology encoding textures and patterns, but for external use rather than internal use.

    • @vrushankdesai715
      @vrushankdesai715 2 years ago

      @@willd1mindmind639 What you just described is exactly how convolutional neural networks work, though. Lower layers recognize lines/edges, and as you go deeper the embeddings sit at higher levels of abstraction, until the final layer spits out classification predictions.

    • @willd1mindmind639
      @willd1mindmind639 2 years ago

      @@vrushankdesai715 No, it is not. Just as an example, imagine an AI-based image editor. Ask that image editor to just display the color red: it can't do it, because there is no discrete encoding for red within a neural network, just as there is no encoding for green or blue. What you are talking about is a very high-level abstraction of a "neuron network", which in biology is a physical set of biochemical relationships based on discrete activations. That is why the brain can easily pick out the red parts of an image: each color sits in its own distribution within neuron networks, and those networks represent a much more detailed collection of relationships than a convolutional network.

      Remember, a convolution is nothing more than a mathematical operation applied to all elements within a collection, such as a collection of pixels; that is how you get Gaussian blur. But that mathematical operation requires work even to distinguish red pixels from blue pixels, or sharp lines from gradients. That level of detail is provided in the brain mostly for free, because of the bottom-up architecture of how biology encodes information discretely using chemicals. There is no "work" to disentangle one color from another using anything like a mathematical convolution algorithm.

      There are no convolutions in a fish's dynamic optical camouflage.
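
      (For reference, the operation under discussion is small. A minimal NumPy sketch, with illustrative names: a kernel slid over an image, plus the 3x3 Gaussian-style kernel that produces the blur mentioned above.)

        import numpy as np

        def conv2d(image, kernel):
            # Apply the same weighted sum (the kernel) at every pixel position
            # of a 2-D image; "valid" padding, no strides. (Strictly this is
            # cross-correlation, which matches convolution for symmetric kernels.)
            kh, kw = kernel.shape
            h, w = image.shape
            out = np.zeros((h - kh + 1, w - kw + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
            return out

        # A 3x3 Gaussian-style kernel: convolving an image with it blurs it.
        gaussian = np.array([[1, 2, 1],
                             [2, 4, 2],
                             [1, 2, 1]]) / 16.0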

  • @alansancherive7323
    @alansancherive7323 1 year ago

    14

  • @pierreshasta1480
    @pierreshasta1480 1 year ago

    A single Poseidon torpedo can destroy an entire country like the United Kingdom; it's stupid to compare it to traditional torpedoes.

  • @gabbyafter7473
    @gabbyafter7473 2 years ago +1

    Definitely need Lex to do this

    • @username2630
      @username2630 2 years ago +6

      I'm sorry, but Lex isn't even close to Pieter in technical knowledge; this interview gets to the content at the right level.

    • @gabbyafter7473
      @gabbyafter7473 2 years ago

      @@username2630 okay

    •  2 years ago +7

      The Lex podcast derailed for me when he started moving away from AI and hanging out too much with right-wing BS

    • @Daniel-ih4zh
      @Daniel-ih4zh 2 years ago +3

      @ true, we need to hear more from men dressing up as women.