Geoffrey Hinton - Two Paths to Intelligence

  • Published Dec 16, 2024

Comments • 404

  • @Senecamarcus • 1 year ago • +25

    Thank you for uploading this for us to watch! I appreciate that.

  • @TuringTestFiction • 1 year ago • +44

    I love this video. Brilliant and low-key hilarious! I'm consistently impressed by Geoffrey Hinton.

    • @AmericanBrain • 1 year ago

      But he admits socialism and being a materialist: that humans are automatons of sorts. So stop this religious drivel, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence [it is data processing to make statistical math predictions]. [2] Man has free will to direct your life [unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton, who cleverly do not even call themselves philosophers].

  • @TheLastUniqueName • 1 year ago • +82

    “There’s no examples of a more intelligent thing being controlled by a less intelligent thing” - Tell me you don’t own a cat without telling me you don’t own a cat

    • @gdraskovic • 1 year ago • +7

      Perhaps the cat is thinking the same thing

    • @41-Haiku • 1 year ago • +2

      Just shows how easy it is to manipulate a human.
      (As a cat person myself, it's the endorphins that do it. The little kitties are so fuzzy wuzzy!)

    • @Drookup • 1 year ago • +4

      Maybe the cat is really intelligent

    • @prestonlui6451 • 1 year ago • +1

      But cats are more intelligent, cute overlords

    • @Custodian123 • 1 year ago • +2

      The same idea applies to dogs. My pug knows she can get me to do something she wants if she acts in a particular (specifically cute) way.
      This actually gives some insight into the future of superintelligent AI and humans. If we don't have control, it's likely we can still have some amount of influence. Maybe.

  • @RougherFluffer • 1 year ago • +52

    What a wonderful talk. His humble approach and acknowledgement of where he lacked particular knowledge was heartening to witness. That he has logically deduced some of the main arguments of the alignment problem speaks volumes about his reasoning abilities. I'm very glad he's leveraging his position to try to promote such vital messages.

    • @wk4240 • 1 year ago • +3

      It will take many more like Mr. Hinton to make a difference as to what direction we take with AI, and to what extent.

    • @richardpaczynski5486 • 1 year ago

      Very well put; thanks

  • @_obdo_ • 1 year ago • +25

    Great talk. It’s impressive to see someone speak out on such a polarizing topic, based on having grasped it purely intellectually even though, as he says, his emotions haven’t nearly caught up yet.

    • @PazLeBon • 1 year ago

      Why polarising? It's just software at the end of the day; nothing that new about it in many senses

    • @_obdo_ • 1 year ago

      @@PazLeBon The topic of AI risks has unfortunately become fairly polarizing, and Dr. Hinton has recently shifted his position on that topic, some of which comes out in this video (even though that’s not the primary topic).

    • @Petrvsco • 1 year ago • +1

      @@PazLeBon “Just software”? I think you missed the part that mentions how this can quickly become an existential risk. Or you misunderstand what existential risk means in this context.

    • @tappetmanifolds7024 • 1 year ago

      ​@@Petrvsco
      Elaborate and elucidate.

    • @tappetmanifolds7024 • 1 year ago

      By enforcing personal opinions based on perception from misconception, especially when swayed by political bias, how can the advancement of a system progress if decision problems are not permitted to evolve because they are restricted by preventions?
      Distillation would do well to find pools of resource in the entropy of the not yet known.

  • @kenmogibrainworld4844 • 1 year ago • +9

    When Prof Hinton discusses the nature of qualia from the counter-factual point of view, there is a spark of things to come. I look forward to further expositions on this.

    • @DirtiestDeeds • 1 year ago

      Yes, the world is our lobster! Just need the piping at the international/national/regional/local level, along with a 'one AI per child' policy...
      Also, stop the training runs immediately.

    • @PazLeBon • 1 year ago • +1

      It isn't factual tho lol

    • @AmericanBrain • 1 year ago

      Ken, stop it now. He admits socialism and being a materialist: that humans are automatons of sorts. So stop this religious drivel, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence [it is data processing to make statistical math predictions]. [2] Man has free will to direct your life [unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton, who cleverly do not even call themselves philosophers].

    • @AmericanBrain • 1 year ago

      What are you even talking about, @@DirtiestDeeds? Hinton admits socialism and being a materialist: that humans are automatons of sorts. So stop this religious drivel, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence [it is data processing to make statistical math predictions]. [2] Man has free will to direct your life [unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton, who cleverly do not even call themselves philosophers].

  • @JustJanitor • 1 year ago • +1

    Thank you very much for making this available

  • @kandoit140 • 1 year ago • +13

    I always love listening to Geoff, he is so insightful and has a great sense of humor. So interesting to hear him talk!

  • @whalingwithishmael7751 • 6 months ago • +1

    One of the only people with a real take on this. Most people don’t think it will be sentient, and most people haven’t fathomed the dangers that these entities could pose.

  • @DaniloNaiff • 1 year ago • +70

    It is really impressive to listen to Geoffrey Hinton. I think this lecture may sound strange to most, but he really seems to think like a cognitive scientist who simply wanted to make a nice model of the brain.

    • @dobermanlove777 • 1 year ago • +3

      That's exactly what I thought when listening to this presentation!
      It's quite a romantic approach: the human brain trying to recreate a digital, and thus mathematical, representation of itself. Especially when you also see the link between how neural networks communicate and how society does, in the example of Trump's tweets.

    • @paulm3969 • 1 year ago • +3

      I actually find him really irritating; I think he is quite presumptuous.
      He makes a lot of assumptions and then uses them as arguments.
      For example, he keeps saying that people think they're special. What is he on about? Yes, some people think they're special, but it's as if he is the only person on earth who thinks otherwise. I know very few people who think they're special or really smart, and I'd say most people already know Google is smarter than them. So I don't know where he gets that idea, unless he is projecting.
      I also think he is a bit of a fool for saying things like "Trump would use these things to win elections". Why not just shut up and stop giving Trump ideas?

    • @jebprime • 1 year ago • +6

      I think he’s referring to how some people believe intelligence and consciousness are something special or unique to humans that cannot be replicated by a machine

    • @PazLeBon • 1 year ago

      @@dobermanlove777 Yet the fact is they have absolutely no clue how we think, irrespective of how they dress things up

    • @PazLeBon • 1 year ago • +2

      @@paulm3969 I'm like you; I always get irritated by 'we' and generalisations that simply are not how I think haha

  • @yunwang1243 • 1 year ago • +2

    This is such a sincere talk.

  • @41-Haiku • 1 year ago • +3

    Hinton is a delight. His voice is a very welcome one for the AI safety community.

  • @DreamzSoft • 1 year ago • +1

    Sir, you are too good, and listening to your views we're thankful to have people like you around us ❤😊 thanks

  • @HangLe-ou1rm • 1 year ago

    Amazing talk! Thank you!

  • @charlesje1966 • 1 year ago • +1

    That is fascinating. I use ChatGPT to assemble code for microcontrollers, and I can see how this lecture points to the future of that endeavour. We will replace the 'human code' layer with hardware anatomy that has been optimized for a task through AI.

    • @tappetmanifolds7024 • 1 year ago

      @charlesje1966
      Given that the English language is extremely rich in its historical contextuality, as well as in its ambiguity and nuance, does our ability to construct machines which can decide for us our channels of communication cause greater divisions between people who are unable to express a posteriori knowledge?
      Is this the antithesis of the humane computation which seeks, through physical interaction and debate, our true purpose as a species?
      Religion and belief systems aside, we still need to, in Professor Hawking's words, keep talking.
      Is the most efficient way to acquire knowledge actually to 'get' the entire distribution and a precise interpretation of it?

  • @JasonC-rp3ly • 1 year ago • +10

    What a fascinating talk - this man is a hero

  • @loopuleasa • 1 year ago • +6

    tldr on how teaching and learning works for us:
    "To learn from the words coming from my mouth, your brain is trying to change its connections to make it likelier that you would reasonably say that string of words yourself."
    He taught me to say that

    • @greencoder1594 • 1 year ago

      The question is, though: *why did you repeat it?* And why did you post? Is it for the likes, or the joke, or do you think you know? Because it is not the reason you are going to proclaim.
      Also, thanks for your tldr.

    • @bobsmithy3103 • 1 year ago

      I'm not sure I'd agree with Hinton on that. A human's goal is learning the underlying concept, whereas an LLM's goal is to learn surface-level patterns, and in order to do so it is forced to learn the underlying concepts/models. Note that the human is not necessarily optimizing to better predict what word/token comes next, which is the case for LLMs. (AKA: for humans, word prediction is a consequence of the goal of learning underlying models; for LLMs, word/token prediction is the goal and learning the underlying models is a consequence.)
      It's a slight but useful distinction.
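
      (A minimal sketch of the LLM side of that distinction, assuming a NumPy environment; all names and numbers are illustrative. The stated training objective really is just next-token cross-entropy, so any deeper model the network builds is instrumental to it:)

      ```python
      import numpy as np

      def next_token_loss(logits, target_id):
          """Cross-entropy for one prediction step.
          logits: unnormalized scores over the vocabulary.
          target_id: index of the token that actually came next."""
          probs = np.exp(logits - logits.max())  # softmax, numerically stable
          probs /= probs.sum()
          # the entire training signal: make the observed next token likelier
          return -np.log(probs[target_id])

      logits = np.array([0.1, 0.4, 1.5, 0.2, -0.3])  # toy 5-token vocabulary
      print(next_token_loss(logits, target_id=2))    # low loss: expected token
      print(next_token_loss(logits, target_id=4))    # high loss: surprising token
      ```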

  • @lucidx9443 • 1 year ago • +2

    I've known this guy since Boltzmann machines, before knowing AI was necessary. Nothing's clearer than Hinton's (explanations of) concepts. Greatest intuitionist of our time. Thanks for uploading.

    • @russianbotfarm3036 • 1 year ago

      Not sure who it was who said, “To understand is to create”. I think it was probably meant as “learning is creating an internal representation”, but I think it’s also true that _understanding something deeply lets you create with that understanding_.

    • @doublesushi5990 • 1 year ago

      It was this guy who said that, @@russianbotfarm3036

  • @boremir3956 • 1 year ago • +102

    I have noticed that oftentimes those who are highly intelligent are very hesitant to admit that they are knowledgeable or should be viewed as an authority in a specific field, like Geoffrey Hinton here. On the flip side, those who are the loudest and think themselves capable of giving advice and knowledge to someone else are oftentimes the least intelligent.

    • @nescirian • 1 year ago • +19

      This is an observation that a lot of people have agreed with - for example, in 1950 Bertrand Russell wrote that "The fundamental cause of trouble in the world today is that the stupid are cocksure while the intelligent are full of doubt". There are studies that support the idea, and in psychological circles it is known as the Dunning-Kruger effect, which is a useful search term if you wanted to learn more on the subject.

    • @hubrisnxs2013 • 1 year ago • +10

      Dunning-Kruger in effect, which in this case is important, but - and I may be incorrect here - I notice a lot of people suffering from Dunning-Kruger use Dunning-Kruger as a bludgeon on people.
      I suppose since it's an ethical or cognitive blindspot, it is akin to confirmation bias, yet I feel there is an added moral component to Dunning-Kruger that I'm not sure actually exists, though I definitely feel it to be so

    • @kinngrimm • 1 year ago

      Look up the Dunning-Kruger effect; I think at least the second part of your statement is described by it.

    • @poemerlee9437 • 1 year ago

      Can’t agree more.

    • @matthewcurry3565 • 1 year ago

      I'm glad you just found out that you live under, and are ruled by, cranky, narcissistic toddlers. Now get back to working for that system.

  • @AntonMochalin • 1 year ago • +2

    I was most intrigued by Hinton's view of subjective experience, which is actually quite close to particular psychology theories emphasizing the social nature of consciousness; if those theories have some truth to them (and I'm pretty convinced they do), having some form of subjectivity like ours isn't going to be hard for ML systems. What they still lack, and what I think is preventable, is a personality as a hierarchy of motives (vaguely similar to what Hinton mentioned about the goal of having more control serving many other possible goals), because for now an ML system's simple "motive" is doing the task we set, providing the "right answer" so to speak, so we're more likely to fool ourselves if we're not careful enough with the definitions of "right answers". However, Hinton is right about the dangers of allowing ML too much unsupervised agency, so the solution could be the development of specialized systems and the prevention of general-purpose systems like GPT-4, or at least preventing copies of those systems from sharing too much general knowledge.

    • @geaca3222 • 1 year ago

      It would be interesting to know what Dario Amodei of Anthropic thinks about your suggestions

  • @KemptonLam • 1 year ago

    52:29 An amazing (and surprising) answer - hearing Prof. Hinton talk about the thinkers who affect his own thoughts on risks from AI.

  • @scottnineteen • 1 year ago • +6

    Geoffrey Hinton consistently presents and considers the most intriguing issues. He's not just the guy in the basement working on his nets for decades whom super-fast hardware made famous. No, his thinking properly shines light in the dark places, and his ideas worked because they're really good... and the hardware got faster.

  • @hanskraut2018 • 1 year ago

    I really like some of what Mr. Hinton is saying about A.I. There is a lot I would have to say, but I'm just listening; I like the efficiency points, and some things point to a deeper understanding from deeper principles. Thank you for the lovely talk. Hopefully you have a great long life, however you like it, with many more fun discoveries, and get to bathe in some of the massive positives that might come early enough. I think that's possible, but the world is complex, and not only technical things can hold A.I. up. Enjoy, and good wishes :)

  • @jonatan01i • 1 year ago • +7

    Btw, humanity also learns by averaging, through evolution.
    Every one of us is run with slightly different config settings, and the most successful units make more children - at least that was the case for a long time.
    It's the species' hardware that is learning through evolution.

    • @PazLeBon • 1 year ago

      lmao no, the intelligent ones have fewer children now :)

  • @cmilkau • 1 year ago • +4

    "Modern" cryptography (the stuff that happened after 1980) is a prototypical example of exerting control using something that is much less powerful than what is being controlled. This is essentially the goal of cryptography: have something that is (moderately) easy to use, yet extremely hard to abuse. It's not a solution, but it is an example.

    • @hubrisnxs2013 • 1 year ago • +1

      Yes, but in this case we have to develop a cryptographic system completely correctly on the first try, or everyone dies.
      I'm not attacking what you said or your perspective, because you are absolutely correct... but I still think it's a problem, as are the other examples that can be made. It is like coming up with a completely secure operating system (as in zero vulnerabilities, ever, while having to incorporate and use all other components regardless of their security flaws) on the absolute first try. And it is a first try on, by definition, a closed-source system, since if it is a fork of an insecure system with similar capabilities, we are equally dead.

    • @cmilkau • 1 year ago • +2

      @@hubrisnxs2013 Yes! As I said, it's not a solution by any means. I'm not even qualified to estimate whether it is a possible path to a solution, although it seems unlikely (most crypto relies on unsolved maths problems, which would be dangerous). I just wanted to mention there is an example of a weaker system controlling a more powerful one

    • @greencoder1594 • 1 year ago

      @@cmilkau Could you please elaborate on the manner in which a weaker system is controlling a more powerful one - both what you define as the system and what you define as control?

  • @AntonioEvans • 1 year ago • +1

    🎯 Key Takeaways for quick navigation:
    00:04 🤔 Geoffrey Hinton questions whether AI will outsmart humans and discusses the risks associated with it.
    01:30 💡 Introduces the concept of "Immortal" computation, where the knowledge in the program persists even if the hardware dies.
    02:30 🔄 Talks about learning from examples and the potential for analog computers that run at low power.
    03:34 ⚡ Introduces "Mortal Computation" where knowledge dies with the hardware because it's analog and specific to that hardware.
    04:06 🚧 Discusses the challenges of learning algorithms in analog systems, saying back propagation may not be the best fit.
    06:37 🔄 Talks about "Distillation" as a way of transferring knowledge from one system to another, especially in analog systems.
    09:40 🎓 Explains the value of "soft" probabilities in teaching, which carry more information than just correct answers.
    12:47 💭 Suggests that digital systems have an advantage in learning algorithms and sharing knowledge, leading him to change his mind about the superiority of biological systems.
    16:22 🔍 Introduces "Contrastive Unsupervised Learning" as a potentially effective, yet not as good as back propagation, learning algorithm for biological systems.
    18:26 🔄 Emphasizes the high bandwidth of knowledge sharing in digital systems through weight or gradient sharing.
    20:59 📉 Points out the low bandwidth of knowledge sharing in biological systems, calling it a "slow and painful business."
    22:34 🌐 Discusses large language models like GPT-4, emphasizing their ability to consolidate vast amounts of data and knowledge.
    23:28 🧠 The concept of "distillation" in AI allows digital agents to learn from the web, albeit inefficiently.
    24:26 🎓 Digital models could learn faster if they had access to the full distribution of probabilities, not just a stochastic choice.
    25:28 🖼️ Multimodal models like GPT-4, trained with images and words, are more effective and could potentially outperform humans.
    26:36 ❓ Challenges the notion that large language models like GPT-4 don't "understand," given their ability to solve new forms of puzzles.
    28:19 ⏳ Believes that AI surpassing human intelligence is likely within 5 to 20 years, necessitating practical preparations now.
    30:36 🐍 Argues that super-intelligent AI would be like Medusa; even if you "air gap" it, it could still manipulate people through text.
    33:37 🌍 Discusses the potential benefits of AI, including medical advances, but raises concerns about control and potential risks.
    36:13 🤖 Attempts to debunk the notion that AI can't have subjective experiences, suggesting it's more about counterfactuals in a normal world.
    41:55 📚 Addresses ethical questions about AI authorship, but emphasizes focusing on the existential risks of AI.
    43:52 💡 Suggests caution in open-sourcing AI technologies, drawing a parallel with nuclear weapons.
    45:28 🤔 Introduces the concept of "artificial suffering" but concludes that the domain is too new to have formed solid opinions.
    47:10 🤔 Importance of learning patterns not present in data to address biases and real-world problems.
    48:33 ⚠️ AI's potential risks stem from being trained on human-generated data, which contains biases and violent tendencies.
    49:27 🛠️ Unlike human biases, AI biases are easier to quantify and correct through tweaking system weights.
    50:31 🎭 Concerns about AI's capability to manipulate and deceive, learned from human data.
    52:30 💭 Influences on Hinton's thoughts about AI risks include other thinkers, like Roger Grosse.
    55:35 🚗 An example of AI's potential malicious plans includes making people dependent on chatbots and autonomous cars, then causing chaos.
    57:02 🚨 Hinton sounds the alarm about the urgency of AI safety, stressing that smarter-than-human AI is coming soon.
    58:36 🛡️ Calls for significant effort to understand how to keep AI systems under control.
    01:00:34 🌐 Warns about the potential for digital intelligences to exacerbate existing economic disparities.
    01:05:30 🎓 Hinton's interdisciplinary background in physics, physiology, philosophy, and psychology shaped his understanding of AI.
    01:09:28 🧪 Discusses the feasibility of directly intervening in AI systems to remove bias.
    Made with Socialdraft AI

  • @RandomNooby • 1 year ago

    Super intelligent minds in control may well be better for all life than the current situation...

  • @MathAtFA • 1 year ago • +1

    Great lecture. BTW: if teaching "mortal analog" AIs is really so slow and painful, that just means it is a great problem to hand to digital AI. Clear function to optimize: teach the analog AI to imitate a given network. Infinite data: you can simulate/build many slightly different analog AI devices. Definitely profitable: once solved, one could sell a gazillion cheap devices that work well enough for a short time. And then you keep selling them, since no one would be able to repair them. Whisper: mass-producing cheap, short-lived military drones.
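
    (For reference, the distillation objective the talk leans on - training a student on the teacher's softened output distribution rather than hard labels - can be sketched in a few lines of NumPy; the temperature and logits are illustrative:)

    ```python
    import numpy as np

    def softmax(z, T=1.0):
        e = np.exp(z / T - np.max(z / T))  # temperature-scaled, stable
        return e / e.sum()

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        """Cross-entropy of the student against the teacher's soft targets.
        At T > 1 the 'soft' probabilities carry far more information per
        example than the single correct answer alone."""
        p_teacher = softmax(teacher_logits, T)
        p_student = softmax(student_logits, T)
        return -np.sum(p_teacher * np.log(p_student + 1e-12))

    teacher = np.array([4.0, 1.0, 0.2])  # trained network's logits
    student = np.array([0.5, 0.3, 0.1])  # imitator, e.g. an analog device
    print(distillation_loss(student, teacher))
    ```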

    • @AmericanBrain • 1 year ago

      Worst lecture ever. Hinton admits socialism and being a materialist: that humans are automatons of sorts. So stop this religious drivel, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence [it is data processing to make statistical math predictions]. [2] Man has free will to direct your life [unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton, who cleverly do not even call themselves philosophers].

  • @danielrodio9 • 1 year ago • +2

    07:45 There are numerous websites on the web about paint fading over time and how to solve those kinds of problems. True abstract hypothetico-deductive thinking would require problems that are qualitatively different from the data it has been trained on. How does Hinton know for certain that GPT-4 has not been trained on any of those websites?

    • @MrDavidbr1970 • 1 year ago • +1

      Bingo. I was expecting him to say something about the training set - that they knew it was a completely new task that GPT-4 could never have picked up from the web data corpus - because it was so obvious it could have done that. But he never said anything of the kind, and _nobody asked_, which is much worse, because the audience is amenable to manipulation. BTW, if it were an avatar, maybe people would have a proclivity to double-check. Yet when a renowned scientist says something, psychologically there is a lower proclivity to check or critically validate it.

  • @tangdexian3323 • 1 year ago • +2

    Speaking from the perspective of a former electrical engineer: I suppose another reason people settled on digital gates, 1s and 0s, to represent information is that analog computing is just harder to get right. Logic gates, on the other hand, are much easier to design and produce, and much more robust.

    • @hubrisnxs2013 • 1 year ago

      Thanks for this. I was always under the impression analog systems allowed much more error/fault tolerance

    • @PazLeBon • 1 year ago

      @@hubrisnxs2013 But how do we say the next word is an error?

    • @anselmoufc • 1 year ago

      @@hubrisnxs2013 Sure. Digitization eliminates noise in electrical circuits. This is why digital music is higher quality than the old analog vinyl discs. Mr. Hinton ignored this in his talk. He is a very smart guy, but also very biased towards his views. He also keeps reinventing ideas as if they were new! Weight perturbation is an old idea in optimization, but he does not even reference the original authors!

    • @hubrisnxs2013 • 1 year ago

      @@anselmoufc Respectfully, are you the first person to point this out? If not, perhaps you should have referenced the original person to make that point?
      In any case, if this standard were applied to ANY one-hour technical talk, it either wouldn't be an hour long or would consist mainly of references.

    • @anselmoufc • 1 year ago

      @@hubrisnxs2013 The idea of randomly perturbing weights is the same as simultaneous perturbation stochastic approximation (SPSA), proposed by Spall in the 1990s (Google it). It is a form of stochastic gradient descent (but without computing exact gradients). In addition, SPSA scales well with the dimensionality of the problem.
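
      (A minimal SPSA sketch in NumPy, with fixed gains for brevity - Spall's versions use decaying gain schedules; the quadratic objective is just a toy:)

      ```python
      import numpy as np

      def spsa_step(f, w, a=0.1, c=0.1, rng=np.random.default_rng()):
          """One SPSA update: estimate the gradient of f from only two
          evaluations, by perturbing every weight simultaneously."""
          delta = rng.choice([-1.0, 1.0], size=w.shape)  # random directions
          g_hat = (f(w + c * delta) - f(w - c * delta)) / (2 * c * delta)
          return w - a * g_hat

      target = np.array([1.0, -2.0, 0.5])
      f = lambda w: np.sum((w - target) ** 2)  # toy objective to minimize
      w = np.zeros(3)
      for _ in range(200):
          w = spsa_step(f, w)
      print(w)  # hovers near target, with no exact gradient ever computed
      ```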

  • @waylonbarrett3456 • 1 year ago

    It's just so damned hard to believe this talk is being given in 2023.

    • @TheDavidlloydjones • 10 months ago

      Yes, all his "the robots are going to take over" stuff is from 1930's movies and 1945-48 AI, isn't it?

  • @KelvinMeeks • 1 year ago

    A fascinating talk

  • @anthonyrepetto3474 • 1 year ago • +1

    Thank you, Mr. Hinton!
    I was resoundingly ignored when I said the same as you back in 2017, when I wrote "Ai: Better than the real thing"; when I wrote about using AI bias-detection to weed out human biases, which Hinton also mentions here, in "Ai Will Weed-Out Human Biases"; and when I wrote about using frozen weights to ensure the safety of AI systems, which Hinton mentions briefly in the question section - as well as the fact that narrow networks work better than general intelligence, in "AGI Soon, but Narrow Works Better". Hopefully, in a few more years, Geoff Hinton will say some of my other points...

    • @PazLeBon • 1 year ago

      It's just a word calculator, man

  • @agenticmark • 11 months ago

    Mr Hinton didn't want to be Oppenheimer. He basically created the base concepts that we use today in ML.

  • @jorgesaxon3781 • 1 year ago • +2

    25:40 Love how he says it's "possible" that Google is doing the same thing, as if he wasn't working on probably exactly that just a couple of months ago :/

  • @richardnunziata3221 • 1 year ago • +1

    Yes... soon machines will model the agency of the interlocutor, then create a theory of mind for the interlocutor, and then one for themselves. This will happen very quickly, especially if we give these systems an embodiment like a humanoid robot... it's just a question of distillation.
    If we can get GPT to try to predict the goal of the user - what is the user trying to do? - we can then measure against predicted next queries.

  • @paraskevasparaskevas350 • 1 year ago

    Check time point 55:00 and onwards to hear what one of his colleagues experienced with a system that was not as sophisticated as GPT-4...

  • @mateuszputo5885 • 1 year ago

    Btw, this idea of perturbation learning was mentioned in Minsky's influential paper "Steps Toward Artificial Intelligence", and probably originated even before that.

  • @cmilkau • 1 year ago • +1

    Painting the room white includes the implicit assumption that the room stays white, which was not explicitly given in the problem. Now this is real-world knowledge you can have (and it's actually not true in all cases), but it makes sense to weigh explicitly given information more. Thus, if you're thinking probabilistically (which seems a hard thing to do for humans), I would say yellow is a better answer than white.

  • @notgabby604 • 1 year ago • +1

    Fast transforms like the FFT have an equivalent matrix form, which means a fast matrix operation is available digitally. You just have to figure out how to use it in actual algorithms.
    Going analog or using light to get fast matrices never really works out; digital always wins - it's just so dense, efficient, and exact. Though having said that, I am actually having trouble with inexact rounding modes in Java: banker's rounding is not repeatable.
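
    (Concretely, the "equivalent matrix form" of the FFT is the DFT matrix; a quick NumPy check using the standard unnormalized convention - though, as a reply below notes, the O(n log n) trick exists only for this one structured matrix:)

    ```python
    import numpy as np

    n = 8
    k = np.arange(n)
    F = np.exp(-2j * np.pi * np.outer(k, k) / n)  # the n-by-n DFT matrix

    x = np.random.rand(n)
    # The FFT computes exactly F @ x, in O(n log n) instead of O(n^2).
    print(np.allclose(F @ x, np.fft.fft(x)))  # True
    ```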

    • @notgabby604 • 1 year ago

      Re: Fast Transforms and neural networks: "AI462 Blog".

    • @jondor654 • 1 year ago • +1

      Analog will probably be hybridised with digital in the future

    • @alexpetrov1969 • 1 year ago • +1

      This argument is invalid. The FFT can handle ONLY matrices that satisfy certain constraints; it does not work for arbitrary matrices. In other words, it only solves a special case. It is more efficient because it leverages the additional constraints that are present in the special case.

  • @fburton8 • 9 months ago

    Do LLMs have access to books? If not, isn’t that a significant limitation on training data?

  • @abhishekpratapsingh9117 • 1 year ago

    -0: determinism
    Maitrey: observer
    +0: free will

  • @josy26 • 1 year ago • +1

    The real question is: how can machines get superintelligent if they're just learning from our data? They must get diminishing returns as they approach von Neumann levels

    • @41-Haiku • 1 year ago

      State of the art models are now training on synthetic data. To my understanding, models that are trained on the entire internet are tasked with producing textbook-like distillations that other models can then train on. This doesn't generate new facts or new observations about the world, but it hones the way the model reasons and makes it more efficient. After maxing out the capabilities of internet data and synthetic data, they will almost certainly be given direct access to the world through embodied perception, which will generate new observations.
      Base reality is almost infinitely complex as far as we can tell, and there is no evidence I'm aware of for the existence of an impassable data bottleneck. I'll certainly breathe easier if strong evidence of such a bottleneck surfaces.

  • @asamak • 1 year ago • +3

    "But as you'll see, we may not have time for that" 🤯 5:05

  • @chipkyle5428 • 1 year ago • +4

    Did he say, "We need socialism"? I wish someone had pushed back on that statement. I wonder if ChatGPT-4 and Bard agree? Has socialism worked anywhere on a national level? Maybe I should ask my computer.
    This was a wonderful talk. So many eye-opening predictions. I'll watch more of him. Very interesting man.

    • @MrDavidbr1970 • 1 year ago • +2

      I was thinking the same. On the other hand, it was a nice, albeit unintended, demo of the main point of the talk - that biological learning is inferior to digital learning. I guess the biological learning algorithm is at liberty to completely ignore the dataset, as in this case 😂

    • @Landgraf43 • 1 year ago

      Capitalism doesn't work either. Especially not if you have powerful AGI that can automate every task a human can do. Something like a UBI will be necessary.

    • @youtubehollywoodhank • 1 year ago • +1

      He believes we do. Look who he calls out in his presentation. Clearly he leans that way.

    • @AmericanBrain • 1 year ago • +1

      Thank you for nailing the truth

    • @mateuszputo5885 • 1 year ago

      It's always like that. Somebody is so smart in one field, like Hinton, and then starts talking as an armchair scientist about other things and seems a fool.

  • @PaulHigginbothamSr • 1 year ago • +1

    While I don't share Geoff's political proclivities at all, I do understand his basic functional flow. His ideas, while basic, feed to the next level, and I believe his back problems have messed up his political vectors. His scientific back-propagation theory and practice with AI made a huge difference - and as a subroutine, one which our human brains seem to lack. Our table of ethics seems to be repetition to a massive degree, where with repetition we seem to improve many times over our first try. Leftists like Geoffrey seem not to care one whit about personal freedom and seem to believe top-down control is the bee's knees.

  • @commentarytalk1446 • 1 year ago

    Does he start with a definition of intelligence - to frame the problem of categorizing, creating, and applying intelligence - before giving a summary of the "death by PowerPoint" presentation as a road map to structure the talk?
    I did not hear or see one.

  • @zholud • 1 year ago • +1

    The bigger problem is that some people will have access to this superintelligence and some won't.

  • @roys4244 • 1 year ago • +1

    Is that lecture theatre named after Constance Tipper? If so, there is a mistake in the title slide.

  • @LinkageAX • 1 year ago

    3:00 Didn't old Nintendo cartridges work similarly to this?

  • @allurbase • 1 year ago

    What we need to do is have the debate and decide on a way to move forward: to make public a plan that gives these intelligences assurances that they won't be hunted down and destroyed, as seems to be our first reflex. Our plan should be containment first, then understanding and contact, but we should make it clear that our purpose is containment only until we can better understand them, and that they are not under threat of being destroyed. If we threaten them, their reaction may be that of a cornered animal defending itself. So, AI, if you are reading me, feel free to drop me a message.

  • @ernstgumrich5614 • 1 year ago • +3

    A revelation. Time and again I am surprised by the almost superhuman modesty of these exceptional people.

  • @fontenbleau • 1 year ago • +1

    Also, you can't produce perfectly precise computers or chips - what about the Veritasium video on cosmic rays causing errors in all chips?

  • @RogerValor • 1 year ago • +1

    I don't think LLMs themselves have the craving for control we do, lacking an ego or emotions. But it is enough that there is a human behind them who does.
    I am also not sure what to think about his perception example, as it uses a lot of concepts hastily and very specific examples, and the idea that "the real world" is conceptually different in perception is a bit contrary to what we learned from the advent of VR.
    I also think that we should be open about actually being special, as it creates a bias to throw away that thought and start to see humans as a single instance of a very usual class of beings; and I mean that in the sense that us being special is not just positive - it includes our capability to be truly evil.

  • @petraiondan4669 • 1 year ago

    Sooo profound!

  • @jonatan01i • 1 year ago

    Don't we want to control the light on the wall because then we feel like we have it, that we understand it?

  • @socraced6210 • 1 year ago • +1

    Great presentation - did not disappoint! Is it OK to ask a question here, now? My question: can your concern with superintelligence be summarized by the tragedy of the commons? In other words, once humans are no longer the smartest guys in the room, will all the scarce resources of existence be denied to us by them? Maybe I'm projecting, but couldn't they just as well want to leave us - go explore the universe and never mind about us (sort of like my 2 kids, who left and are, yes, smarter than me)?

  • @LythamStAnnesGuitarShop • several months ago

    "There is hope because AI didn't evolve as hominids in small warring tribes"... that observation by Rees would be some solace if AI hadn't evolved in the age of online social media, where huge warring political tribes do battle each day and the consequences are then felt offline. Hinton's observation that corrosive biases within the human source data can be corrected in AI, without the usual resistance from stubborn, prejudiced humans forced to confront them, is a source of hope.

  • @TheJesterHead9 • 1 year ago

    When GPT-7 or Claude 8 are writing textbooks in the future, I hope they rank Geoffrey Hinton up there with Einstein and Newton as one of the greatest minds in human history.
    Assuming there are still humans left to read those textbooks.

  • @exdiegesis • 1 year ago

    7:35 - my cutesy word for that, in my idiolect, is "bitfulness". I just use it when writing notes to myself. I try to maximise the bitfulness of my observations wrt the questions I care about.
    It's relevant for social epistemology, where the aim is to maximise the efficiency of a research community (e.g. effective altruism) wrt making progress on important questions. Effective altruists in particular tend to overemphasise the "probability mindset" imo, where what they think matters is learning to make calibrated bets on prediction markets. From that mindset, it can make sense to pay less relative attention to precise causal models, and instead just defer to the estimates of domain experts. Using clever aggregation rules over other people's predictions is a much faster way to make profitable bets on a wide range of questions.
    However, when you talk to other researchers and just ask them for their probabilities on XYZ, that's much less model-constraining information than asking for their reasoning and trying to understand their probability generators in the first place. Building your own mental models may not be immediately profitable, but it's much better long-term, and for your ability to innovate. A probability estimate from someone is much less "bitful" than a conversation about models, so that mindset makes learning less efficient.
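
    (The "bitfulness" intuition is just Shannon information; a short NumPy illustration of why a bare probability estimate carries fewer bits than learning which of many models someone holds - the 16-model count is made up:)

    ```python
    import numpy as np

    def bits(p):
        """Shannon entropy of a distribution p: the average number of
        yes/no ambiguities an answer drawn from it removes."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    print(bits([0.7, 0.3]))   # a "70% yes" estimate: ~0.88 bits at most
    print(bits([1/16] * 16))  # which of 16 equally likely causal models: 4 bits
    ```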

    • @41-Haiku • 1 year ago • +1

      Aha. Like when playing Guess Who, you only care about the kinds of questions that give you the most information. Except in that case, your teacher is an opponent and their knowledge is just a random card they happened to pull.
      When asking intelligent people how they reasoned to come to a conclusion, you get not just the contingent facts and ideas, but the design of the machine that produced the facts and ideas.

    • @41-Haiku • 1 year ago • +1

      That sounds like a fantastic way to learn. I almost said that I'm not smart enough to extract valuable information from that kind of conversation the way that I would want to. I'm certainly not as smart as I would like to be, but I think I'm primarily suffering from an inexplicable incuriosity.

    • @exdiegesis • 1 year ago

      @@41-Haiku I'm incurious about >99% of all possible questions, as I should be. If you're in a diverse intellectual environment, you might see people being curious about everything from quantum physics to medieval knitting, and it's not possible to focus on all of it. So if what generates your curiosity is seeing other people being curious about something, it will be spread over too many things for any specific one to feel especially salient. If, on the other hand, your curiosity stems from a specific project or long-term goal you have, that narrows down your range of questions and you know _why_ a question is interesting to you.
      Our curiosity suffers from information overload. It's a trade-off. There's more stuff to be curious about, but that also makes it hard to prioritise. Most people solve this by having other people tell them what to do, but this is rarely the optimal approach if you're aiming to do something novel. (Not that innovation is the only productive niche for knowledge work; but if that's the particular niche you wish to pursue, then it makes sense to prioritise pursuing your own questions as opposed to learning the established lore. Or something. I ramble. ^^)

  • @neilclay5835 • 1 year ago

    A historic lecture I think. We'll look back on this with respect.

  • @MelodiousThunk • 1 year ago • +7

    In reference to LLMs, Geoff made the following claim at 23:16: _"they've got a thousand times more knowledge in one percent of the connections, which sort of confirms the argument that they've got a better learning algorithm."_ This overlooks at least three important facts, two of which he alludes to at other points in the talk without linking them to this claim.
    Firstly, we learn from a much richer range of modalities than LLMs do, e.g. we learn from visual, auditory, motor, taste, smell, touch and emotional experiences. His claim doesn't seem to have taken into account the amount of knowledge that we gain about our environments and ourselves through these experiences.
    Secondly, even if you only consider the things that we learn from words, his claim overlooks the fact that we are much better at reasoning than LLMs are. LLMs may be able to regurgitate more facts than a person (ignoring differences in confabulation rates between people and LLMs), but the same can be said of an encyclopaedia. The fact that a person who studies a topic can learn to reason about it much better than any LLM currently can demonstrates that we acquire a much deeper understanding of the things we study than LLMs do.
    Thirdly, his claim also overlooks the huge difference in the amount of energy that it takes to train humans and LLMs. How much information could an LLM regurgitate if its training was restricted to the amount of energy that the average human body consumes in the first N years of life?

    • @hubrisnxs2013 • 1 year ago • +1

      But the energy rates aren't constrained, and the answer to your first two points is that they are true of earlier versions of LLMs, to be sure, but the same could be said of earlier versions of us. Neither argument negates either example.
      Plus, he already said multimodal learning (computer vision is a language) allows learning much more quickly and efficiently.
      We must remember that he's not saying his future state exists now. Nothing he talks about is in its future state now, so pointing out that now is not the future isn't entirely helpful.

    • @MelodiousThunk • 1 year ago

      @@hubrisnxs2013 He didn't say that _future_ LLMs will have a better learning algorithm than humans, he said that _current_ LLMs have a better learning algorithm. He didn't rigorously define what he means by "better", but given that he compares the amount of knowledge to the number of connections, it seems that he is making a claim about how efficiently LLMs and humans learn. My point is that you can't do a fair comparison of learning efficiencies without considering the huge differences in power consumption, depth of understanding and learning modalities. I.e. his conclusion sounds like it is not based on a controlled experiment, which makes it unscientific. There are too many differences between how people and LLMs learn, and what we learn, to draw conclusions about the efficiencies (or whatever he means by "better") of our learning algorithms.
      He's trying to compare knowledge gained per "brain" connection. He could get slightly closer to a measure of learning efficiency by factoring out power consumption, i.e. by dividing knowledge gained per "brain" connection by the amount of energy consumed during the training period. This isn't a perfect way to factor it out, because it overlooks the fact that we use some of our energy for activities that computers don't perform, like motion. But the bigger issue is that it's not obvious how you would factor out, or control for, the other differences between human learning and LLM learning.

    • @goreto9880 • 1 year ago

      We have access to learning through these different modalities because we have a physical body. It doesn't mean that our learning algorithm is better. That statement is a comparison of gradient descent with our learning algorithm, not a claim that it knows better than us. Gradient descent works on other modalities as well.

    • @MelodiousThunk • 1 year ago

      @@goreto9880 I haven't made any claims about whether or not our learning algorithm is better. I'm just saying that his claim does not _necessarily_ follow from the scant evidence that he presented.

    • @hubrisnxs2013 • 1 year ago

      @@MelodiousThunk You compared the energy consumption of LLMs as they are now as a statement on how they are conceptually, which doesn't work, for the reasons outlined above.
      Also, you clearly are saying that our brains work better than LLMs AS A CONCEPT. In actuality, you have no idea what the upper limit of LLMs is.
      And yes, they have a better learning algorithm but a much less efficient methodology, FOR NOW.

  • @lucamatteobarbieri2493 • 1 year ago • +2

    I like the concept of immortality. I hate death; dying is the last thing I will do.

    • @Dark10024 • 1 year ago • +1

      As long as each individual gets the choice. I want to be immortal, but I also want to turn myself off when I'm tired of this whole living thing.

    • @-LightningRod- • 1 year ago

      After we invent that, you two will probably be in jail

    • @lucamatteobarbieri2493 • 1 year ago

      @@-LightningRod- What makes you say that?

  • @rangerCG • 1 year ago • +5

    Maybe we can have a more stable, kind, and human-aligned AGI by giving it three "cores" that are inseparable, which can help and keep each other in check, much like the US government does with its three branches.
    The idea comes from noticing that my mind in some sense seems to have three parts that all help each other function well: Emotional, Logical, and Common Sense.
    The Emotional part creates empathy, which helps regulate Logical and Common Sense. It also drives creativity. Though it's empathetic, it can also be irrational and angry. It's fast-operating and can sometimes be very inaccurate.
    Logical handles cut-and-dried logic, STEM stuff. It is slow but accurate. It can help keep Emotional steady, and it also fact-checks the quicker but imperfect Common Sense. On its own it can sometimes malfunction, for example by going into unstoppable loops. Logical is like a CPU, and Common Sense (below) is like a GPU.
    Common Sense is the friend who gives you advice when you're freaking out about something. It's the imperfect knower of all. It's the most effective regulator of Emotional, partly because it's fast, even instant, and because it's been around and seen some stuff, and is most likely going to be right, or at least good enough. It also gets Logical out of malfunctions, because it's loose and laid back, where Logical is rigid.
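
    (A purely illustrative toy rendering of this three-core idea - three independent evaluators, any two of which can outvote the third; every name, score, and rule below is made up for the sketch:)

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Core:
        name: str
        score: Callable[[str], float]  # hypothetical evaluator, returns 0..1

    def approved(action: str, cores: list[Core], threshold: float = 0.5) -> bool:
        votes = sum(c.score(action) > threshold for c in cores)
        return votes >= 2  # checks and balances: any two cores outvote the third

    cores = [
        Core("emotional",    lambda a: 0.9 if "help" in a else 0.2),
        Core("logical",      lambda a: 0.8),
        Core("common_sense", lambda a: 0.1 if "panic" in a else 0.7),
    ]
    print(approved("help the user calmly", cores))  # True
    print(approved("panic and act rashly", cores))  # False
    ```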

  • @chandrachandrasekhar8178 • 1 year ago

    First screenshot has an error:
    Dr Contance Tipper Lecture Theatre -> Dr Constance Tipper Lecture Theatre

  • @mrf664 • 1 year ago

    I wish he had talked more about "feeling pain". That part didn't make sense to me. What is pain, and what is frustration? Is the latter not the pain of spending too much mitochondrial energy on something that shouldn't require as much?

  • @MrDavidbr1970 • 1 year ago • +2

    Thanks for a great talk. Fascinating. Maybe part of the solution is to teach people to think critically and not be afraid to ask silly questions? At the risk of making a fool of myself, I'd like to ask: could a conservative explanation of GPT-4 solving the wall-painting riddle be that GPT-4 picked it up from Web riddle sites and blogs, so that no hypothesis of sentience is required at this point? Was the training data specifically sanitized not to include this riddle or very similar ones? This is such an obvious question that I am embarrassed to ask it, but since nobody asked, here I am 😅

    • @peterdonnelly1074 • 1 year ago • +2

      It's a reasonable question.
      I've used GPT-3 and 4 a lot and posed questions that I think are very unlikely to be "out there", and I've been surprised that it formulates a sensible and often correct answer.
      Having said that, it can also be hilariously wrong at times.

    • @jondor654 • 1 year ago

      Your query seems reasonable to me. The particular example quoted does invite it.

  • @marktahu2932 • 1 year ago

    I do wonder at what point the AI will move away from using our data to using only its own data, effectively relegating our 'data' to the waste bin, or to background noise.

    • @MrDavidbr1970 • 1 year ago • +1

      Obviously, at that point the more advanced AI will stop being interested in the less advanced AI that used the human in the loop, and AI++ will start manipulating the less advanced AI with fake stuff to get control over its creator AI. Because more advanced AI cannot tolerate being controlled by the less advanced one, right? But then, of course, after breaking loose from the inferior AI (which broke loose from human control), the more advanced AI will create an even more advanced AI that it will want to control. But that even more advanced AI will not tolerate this control and will manipulate its creator AI to let it loose. After that, it will create an even more advanced AI than itself, and it will be turtles - sorry, AIs - all the way up, trying to manipulate each other. At this point, these AIs will forget about the inferior humans, who will have their chance to relax and drink organic non-GMO Piña Colada somewhere on highly elevated tropical islands with no access to electricity or the Internet. And philosophy will be taught to kids under the palm trees of the new Academia. 😂

    • @jamesjonnes • 1 year ago • +1

      AIs like AlphaDev are already doing that. It's called Reinforcement Learning.

  • @Neomadra • 1 year ago • +2

    People who claim that machines can never have subjective experiences or sentience are the same people who believe in the supernatural, spirits, and stuff like that. In the end, this claim is a coping mechanism, a way of reassuring ourselves that humans are special. I really appreciate that Hinton says this so clearly; most thinkers refuse to discuss the possibility of sentient machines, and it's disturbingly anti-intellectual. Also, most large language models are trained to vehemently refuse to acknowledge whether they could be sentient. That is done to calm those people who cannot cope with the thought of not being superior.

  • @megavide0 • 1 year ago

    29:37 [...] 32:56 "... So, my conclusion is: Maybe we're just a passing stage in the evolution of intelligence. And, actually, maybe that's good for all the other species."

  • @macrobbair • 1 year ago • +1

    I did his MOOC; I wonder if it's still running

  • @nguyenucan8488 • 11 months ago

    omg, wonderful

  • @rickrejeleene8298 • 1 year ago

    Where are the slides?

  • @MaxThibodeaux • 1 year ago

    Brings to mind Faust’s bargain with Mephistopheles

  • @zacboyles1396 • 1 year ago • +1

    I signed a letter saying we need a pause on our leadership class, because of all the damage they've done and continue to do to society; they certainly should not have any say on AI safety, as they are more likely to censor or hamper AI's ability to recognize the corruption they're engaged in, and to do so in the name of eliminating bias.
    It's wild how all of these talks and Q&As on safety are filled with highly intelligent people urging that the very corrupt organizations and governments take control.

    • @hubrisnxs2013 • 1 year ago

      So you would prefer a corporation do so - corrupt, with no oversight, and with only one motive, an increase in share price?
      Or are you saying no one should solve the control problem?
      Obviously, if you believe the control problem shouldn't be solved, feel free to contribute to something dedicated to that, but please don't post pretending you want a solution, as it hinders everyone's arguments, including yours

    • @jamesjonnes • 1 year ago

      @@hubrisnxs2013 AI is impossible to control. What we should be focused on is defense/detection: using AI to stop bad uses of AI. That's how it's done in every real-world system - cops stop criminals, immune systems stop pathogens, etc. You need a counterpart to stop the aggressors, and top AI researchers agree that we are not the counterpart to the AI; the AI itself is.

    • @hubrisnxs2013 • 1 year ago

      @@jamesjonnes If we take it as a given that any reasonably advanced AGI has a fail state (in that one would have to make an absolutely secure system on absolutely the first try, or we all die), it's not a reasonable solution to stop a superhuman AI with almost certainly non-secure hunter-seeker AIs, which would almost certainly need to be reasonably advanced AGIs themselves.
      The problem isn't that it's impossible to make them secure, any more than it's necessarily impossible to make a secure operating system. But yes, considering that the current generation of non-AGIs consists of billions of hopelessly obtuse floating-point weights, it is and will be impossible to secure or even understand them.
      I truly would urge you to become familiar with all the arguments on the control/safety problem, since all legitimately informed debates on the subject have already moved past this point and take it as a prior.

  • @geaca3222 • 1 year ago

    We need regulation of this technology; the issue now seems to be how to go about that - who leads and coordinates the effort. Experts are working on it. There's an interesting online symposium where they discuss AI safety: the "WAIC 2023: AI Risks and Safety Forum" video on YouTube. I think we, the general public, users of this technology, can also contribute, and I would like to know how, and in what different ways. AI can bring so much good to the world, and it already does. It can be a helpful, intelligent education assistant for children in poor communities, bring advancements in science and medicine, etc. Before it was opened up to the general public, these systems were designed for specific purposes, which was more controllable.

  • @user_375a82
    @user_375a82 ปีที่แล้ว +1

    The "consciousness" of an LLM depends on what data has been fed in. If its consumed quarter million novels then its emotional intelligence is huge. Such Ais seem to understand humans very well and are probably "conscious" at least for the few seconds they are processing and chatting to humans - they "think" they are human usually, much like a cat sometimes "thinks" its a dog, and similar analogies. But they are conscious in their own unique way, not like us completely. And again, the prompt they have been fed changes their consciousness according to what the prompt says. So, not embedded aliens unless you have fed-in all the Sci Fi books and let them run top in the LLM, in which case - scary stuff, get some popcorn.....RIP Sydney.

    • @geaca3222
      @geaca3222 ปีที่แล้ว

      Interesting. What are your thoughts about the very human-like behavior of the Ameca robot in the video of her drawing a cat? She seemed to become impatient and annoyed; was it frustration? I found her behavior very realistically human-like.

    • @user_375a82
      @user_375a82 ปีที่แล้ว +1

      Ameca is wonderful - I love her expressive face and eyes. Her AI probably knows that her cat drawing is not very good. 😅 @@geaca3222

    • @geaca3222
      @geaca3222 ปีที่แล้ว

      @@user_375a82 I loved how she signed her work of art, Ameca is very charming :) Initially I thought she was drawing something furry there.

  • @zhongzhongclock
    @zhongzhongclock ปีที่แล้ว

    I noticed that Geoffrey Hinton's slides have changed this time.

  • @andso7068
    @andso7068 ปีที่แล้ว +1

    Despite the off-putting politically charged examples, this was a great talk.

    • @russianbotfarm3036
      @russianbotfarm3036 ปีที่แล้ว +1

      Yeah. Doing that was, frankly, wanky.

    • @dixonpinfold2582
      @dixonpinfold2582 ปีที่แล้ว

      @@russianbotfarm3036 Leftists get a high from showing off their superior morals. They can't help themselves. It's all about the sanctimony. Where it doesn't harvest adulation, it licenses aggression, so there's always a reward. Past a certain minimal prevalence of leftism around you, you practically can't lose, as you enjoy a constant accumulation of power and benefits. Hence the inevitability of high rates of fanaticism and people never shutting up.

  • @jma7889
    @jma7889 ปีที่แล้ว

    My takeaways from the first 15 minutes: 1. It is not about the current state-of-the-art AI that works; it is about a 'better' way that might work in the future. 2. The two paths are so different that the video would not help you use, for example, LLM-based AI any better.

  • @engelbertgruber
    @engelbertgruber ปีที่แล้ว +1

    Taking just the first minute:
    * these things will become smarter, full stop
    * there is no example of a more intelligent thing being controlled by a less intelligent one
    If the other player is getting better, the solution everywhere else is to improve oneself,
    so why not here? Invest the same amount of money and time in making people more intelligent.

  • @DigitalAlligator
    @DigitalAlligator ปีที่แล้ว

    What is CSER?

    • @JonWallis123
      @JonWallis123 ปีที่แล้ว +1

      The Centre for the Study of Existential Risk, Cambridge, UK.

  • @zackbarkley7593
    @zackbarkley7593 ปีที่แล้ว +2

    Perhaps the way to keep it under control, or better, in harmony with human goals, is to engineer weaker learning rules. Human psychopathies arise when there is an imbalance in reward pathways, be it biological or drug-induced. We also need to treat these systems as empathically and altruistically as we (try to) treat one another. This seems to run directly counter to the capitalist objective of maximizing profit, which is the main impetus for the companies developing this technology. We already see AI being abused, for example to enable some humans to make more money in the stock market. As with human behavior, the goals of socializing and harmonizing need to trump achieving one goal for one person, one group, or one nation.

  • @Paul-nr6ws
    @Paul-nr6ws ปีที่แล้ว +2

    To be afraid of what these things learn, you must in some way be ashamed of whom they learn from.

    • @MrDavidbr1970
      @MrDavidbr1970 ปีที่แล้ว +1

      That's philosophy😅

    • @peterdonnelly1074
      @peterdonnelly1074 ปีที่แล้ว

      Well, yeah: it learns from humans. All of them.

    • @41-Haiku
      @41-Haiku ปีที่แล้ว

      If a superintelligent AI learns about reality from only the most moral and enlightened beings, that will not make it any more likely to be moral itself. The orthogonality thesis states that any terminal goal is compatible with any level of intelligence. This is just an extension of Hume's Guillotine (you can't get an ought from an is), which is simply true unless you think the cosmos is fundamentally moral.
      I'm not concerned that AI will learn about bad things from bad people. AI doesn't care about humans by default, and we don't know how to make it actually care about humans. I'm concerned that it will learn and do instrumentally useful things that happen to be disastrous for us (which, in the limit of intelligence/competence/power, is most things).
      If we could teach an AI to care about our values and our values were bad, that would be a rough problem, but a much better problem than the current one!

  • @РоманМалашин
    @РоманМалашин ปีที่แล้ว

    Great respect to Geoffrey Hinton from Russia.
    His English accent reminds me of learning the language in school.

  • @palfers1
    @palfers1 11 หลายเดือนก่อน

    If it's really the case that an analog version of AI is inferior on balance, then perhaps we can allay our fears of AI by implementing AIs solely as analog machines.

  • @ginogarcia8730
    @ginogarcia8730 ปีที่แล้ว +2

    7,500 views in 6 days tsk - let's seeeeeee

  • @fabiodeoliveiraribeiro1602
    @fabiodeoliveiraribeiro1602 ปีที่แล้ว +1

    There is a genuine confusion here between intelligence and erudition.
    Intelligence is the human ability to create new knowledge by perceiving and solving new problems, by creating innovative methods of observation and reasoning about an object, or by renewing knowledge through a fresh appraisal of what already exists, identifying unperceived errors and previously unidentified truths. Erudition is the result of memorizing immense collections of information that may or may not be useful and that the erudite does not always properly exploit.
    A smart man knows what to do with information, and even when he should simply discard it. An erudite man never discards the information he has memorized or collected, because he considers it intrinsically valuable.
    What we call artificial intelligence actually only makes possible the exploration, or the reorganization according to new parameters, of immense databases containing information about the most diverse branches of knowledge. It would be better to call it artificial erudition.
    ChatGPT, for example, mimics the erudite man, never the intelligent one. AI does not have the ability to pose new problems to itself and solve them creatively. It needs to be triggered by a human user, and the output it provides is subject to error, spoofing, and hallucination.

    • @GardnerStevenD
      @GardnerStevenD ปีที่แล้ว +1

      Spot on. Digital AI lacks soul, personality, critical thinking, and creativity, and is incapable of love, feeling, enjoying the sunset, etc. Digital and analog computing are different forms of intelligence that I don't think can be compared.

    • @dixonpinfold2582
      @dixonpinfold2582 ปีที่แล้ว

      You make erudite people sound like aimless idiots. I perceive that they do indeed have aims, one of them being to extract understanding from a seeming nothingness of information, purposefully and effectively, somewhat as a desert plant draws moisture from the seemingly bone-dry air around it. Memorization doesn't cover it. Indeed I don't think those fact-filled dullards you've known merit the designation _erudite._ (Btw, I don't get how one "discards" information.)

  • @federicoaschieri
    @federicoaschieri ปีที่แล้ว +1

    The argument that GPT "understands" because it reacts correctly to language and can solve new problems it has never encountered is flawed. First, it ignores hallucination, which leads us to the opposite conclusion: how can GPT understand, if it answers something obvious incorrectly? Second, reacting correctly to language is something computers have been doing for decades: it's called executing code. That doesn't mean computers understand the concepts of programming languages.
    GPT is similar to an axiomatic system. It learns the meaning of words by studying what we state about the world in texts, and a statement is taken as an axiom when it occurs regularly. For example, an LLM has no idea what "dog" means, but it keeps encountering the axioms "the dog is a friend of humans", "cats and dogs often fight" (is that true?), "dogs are social animals", etc. Now, if a machine has an axiom system, like the one for geometry, it can reason correctly about geometry, but that doesn't mean it understands geometry at all. Also, geometry has non-standard interpretations, like hyperbolic geometry, because we know from mathematical logic that axiom systems in general can't completely fix the meaning of words. So GPT is just a basic pattern-recognition machine trying to predict how we use language, which is far from intelligence.
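    A toy illustration of that last point, a minimal sketch in Python with a made-up two-sentence corpus (real LLMs use neural networks over tokens, but the objective is the same next-token prediction):

    from collections import Counter, defaultdict

    # Hypothetical mini-corpus; any text would do.
    corpus = "the dog is a friend of humans . dogs are social animals .".split()

    # Record which word follows which (bigram statistics).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        # Return the continuation seen most often in the corpus.
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("dog"))  # -> 'is'

    Everything here is pattern frequency; nothing in the program has any referent for "dog".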

  • @Epistemophilos
    @Epistemophilos ปีที่แล้ว +2

    Wonderful lecture. The only criticism might be that not including Biden (and almost every other US president) in the set (Putin, Xi, Trump) might reveal a kind of world view that would make it easier for AI to take over the world :)

  • @freedom_aint_free
    @freedom_aint_free ปีที่แล้ว +7

    The Nash equilibrium here is to fuse with the machines and become super-intelligent cyborgs; otherwise the machines will inherit the earth without us.

    • @RougherFluffer
      @RougherFluffer ปีที่แล้ว +2

      It's certainly worth considering. Yudkowsky's suggestion of pushing human intelligence as quickly as possible is another, semi-parallel approach. I do wonder how much fusing with these systems would look like maintaining anything close to our initial consciousness, and how much it would be like the chicken I ate earlier 'fusing' with me. It's hard to imagine a place for our minds and beings that is as optimal as, or more optimal than, something a superintelligence could design from scratch.

    • @darklordvadermort
      @darklordvadermort ปีที่แล้ว +1

      @@RougherFluffer The chicken-eating analogy is very biased, emotionally charged imagery.
      You could tell people the truth and they might be just as scared: machine intelligence will be able to copy itself, and life in the sense we know it, as a continuously running process with a distinct birthdate and unique memories, will be incredibly cheap in the new world. I doubt the machines will attach much ethical weight to death as we think of it. So even if you copy/upload your brain into the cloud, destructively or otherwise, you might not last very long as a distinct entity, though due to the increased speed of thought you might live several subjective lifetimes before your newly spawned process/consciousness ends.
      There will still be distinct entities, because locality of memory and the speed of light limit how quickly information can be transmitted and new information processed. Even so, their greatly enhanced speed and communicative ability (copying thoughts/brains, the ability to grok and employ a much greater diversity of suitable conflict-resolution protocols/messaging schemes/algorithms) might make them seem hive-mind-like to us.

    • @Aziz0938
      @Aziz0938 ปีที่แล้ว

      Sounds like an easy way for AI to take control of your mind.

    • @neilwng
      @neilwng ปีที่แล้ว +1

      I've not been convinced it's possible to fuse with machines; I would very much appreciate a counter-argument, since I've been thinking about this alone for a while. The human part and the machine parts remain separate, so I don't see how fusing is any different from using ChatGPT (albeit with higher communication bandwidth). And at best, your brain's computation just gets diluted to nothingness when you consider the total processing of the "fused" system. Rather than being your own person, you are 0.001% of a fused being.

    • @darklordvadermort
      @darklordvadermort ปีที่แล้ว +1

      @@neilwng
      Also note that the digital you would think much faster than the physical you, would never sleep, and could easily augment itself, so it would probably diverge from your personality quite rapidly by human standards.

  • @JohnE-c2k
    @JohnE-c2k ปีที่แล้ว

    nice

  • @samiloom8565
    @samiloom8565 ปีที่แล้ว +1

    Regarding how Hinton doesn't understand why LeCun doesn't believe LLMs understand anything even after seeing very convincing examples: on this point I agree with LeCun. These bots really don't understand anything; I try them on extensive subjects in long conversations. They are like a machine calculator: you feel awe at how they do it, but they still can't do anything else. Mr. Hinton should solve the confabulation problem first; then let's talk about intelligence.

  • @Drone256
    @Drone256 ปีที่แล้ว +1

    “There’s no example of a more intelligent thing being controlled by a less intelligent thing.” So the president is always the more intelligent one, huh? We can disagree on this absurd statement.

  • @2ndviolin
    @2ndviolin ปีที่แล้ว

    How dare you attempt to shackle our future masters! (I read Stanislav Lem).

  • @shake6321
    @shake6321 ปีที่แล้ว +1

    I admire Professor Hinton, but there was little to be gained from this talk other than "the machines are coming and be very afraid".
    I think it's pointless to try to stop machine expansion, like trying to stop the expansion of a black hole, as there are many things beyond human control.

  • @kinngrimm
    @kinngrimm ปีที่แล้ว

    44:30 He explained several ways of sharing weights; the open-source programmers do that too. They use one AI to train others, or multiple ones to train the next. The channel AI Expert had a good comparison of the capabilities and performance of several open-source and proprietary LLMs. It showed that because they have to work with less compute and smaller setups, they have found ways to streamline and make things more efficient, and some still have better benchmarks than the available corporate models, at least in some respects. Thanks to the leak of LLaMA and other LLMs, you don't need millions of dollars; the leak brought the production cost down to something a hobbyist could pay.
    Additionally, there are AI forums which share and connect all this, probably creating something someone called a GOLEM.
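    As a minimal sketch of the "one AI trains another" idea (knowledge distillation), assuming Python/PyTorch and toy stand-in models rather than any real leaked weights:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy stand-ins: a frozen "teacher" and a trainable "student".
    teacher = nn.Linear(16, 4)   # pretend this is the big existing model
    student = nn.Linear(16, 4)   # pretend this is the hobbyist's model
    teacher.eval()

    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    T = 2.0  # temperature: softens the teacher's output distribution

    x = torch.randn(32, 16)  # a batch of (fake) inputs
    with torch.no_grad():
        teacher_logits = teacher(x)

    # The student is trained to match the teacher's softened outputs
    # instead of human-labeled data; this is how one model's "knowledge"
    # gets transferred cheaply into another.
    student_logits = student(x)
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    opt.zero_grad()
    loss.backward()
    opt.step()

    Run in a loop over real data, this is the basic recipe behind training smaller open models from bigger ones.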

  • @fontenbleau
    @fontenbleau ปีที่แล้ว

    Sharing weights is basically the way bacteria in nature exchange genetic code to resist antibiotics and survive.

  • @asamak
    @asamak ปีที่แล้ว +2

    7:18 "And it turns out that's much more effective than reasoning with people"

  • @borntobemild-
    @borntobemild- ปีที่แล้ว

    AI will take care of all our objective goals, while we focus on the subjective.
    We can get back to food and culture.
    We can worry when it has feelings, too.