Geoffrey Hinton: Large Language Models in Medicine. They Understand and Have Empathy

  • Published on Nov 10, 2024

Comments • 30

  • @moeinshariatnia59
    11 months ago +5

    Awesome interview! Shedding light on the context of his famous quote about radiologists and on the actual reason behind leaving Google was very valuable and something we can't find on the mainstream media. Thanks a lot Dr. Topol.

    • @ioanagrancea6091
      10 months ago

      Completely agree with that. Honesty and contextualization are what we need in this area. It speaks very well of GH's character indeed. I do not agree with many of his conclusions about AI and its comparison to the human mind, though.

  • @GeoffreyRutledge
    11 months ago +7

    Great discussion of the uses and potential impact of LLMs in healthcare. Some of these uses that go beyond summarization of existing data are very important. With the right prompts, LLMs can already mimic the behavior of a primary care doctor -- see, for example, the "Dr. A.I." feature that is now live on HealthTap, where people who schedule a visit with a doctor to evaluate their medical problem can do a pre-visit interview. The AI asks a series of questions (in a comforting and reassuring manner) to get at the symptoms or features that might be present for each of the possible explanations/causes of a person's medical complaint, then writes the subjective note for the doctor and constructs a differential diagnosis that could remind the doctor of other possibilities to consider!
    More than 80% of the time, based only on the interview questions, the AI is able to generate the diagnosis that the doctor who has seen and examined the patient puts on the chart (the doctors were not shown the AI-generated differential).

  • @scitechtalktv9742
    10 months ago +2

    Watching this interview on the last day of the year 2023 gives me a positive outlook on things to come next year in AI!
    I especially liked the part where he explained the attitude many people have when confronted with even the possibility of AI outperforming humans!

  • @datamatters8
    10 months ago +2

    RE: Human acceptance of being superseded by AI machines.... I think the answer is of course yes because increasingly sophisticated AI networks will be trained on the enormous corpus of medical literature both previously written and being generated every year. No single human can keep up with it. It's not really a new concern though. Humans trained in a specific discipline to become an expert already supersede most other humans without such training. We go to a surgeon for surgery and an auto mechanic for car repair. And factory automation has superseded humans on many factory tasks like welding, cutting, painting for decades. These tools increase productivity and quality and the same will apply to knowledge workers.

  • @JJ-fr2ki
    10 months ago +1

    Neuro-philosopher who has had conversations with Geoff: 25:41 I also have a theory of LLMs. They do have a kind of understanding. It is different from ours, just as AI has discovered a multiplicity of kinds of intelligence. As for true empathy, I suspect it needs to emulate or simulate other minds. As for "subjective experience," there is too much to say here, except that Geoff is referring to perspectival experience, which is distinct, but which Sellars argued could be the origin of the illusion of subjective experience and sense data in his tie allegory.

  • @LoisSharbel
    11 months ago +4

    What a privilege to listen to you two discuss these unimaginable advances in knowledge and process so respectfully and clearly that I, with only superficial understanding of the subjects, can feel a glimmer of understanding the information. HUGE respect for Dr. Geoffrey Hinton, both his genius and his character! THANK YOU for this! Glad to learn about Dr. Eric Topol, too, a new source for me!

    • @ioanagrancea6091
      10 months ago

      It is a bit naive to think that, while having a superficial understanding of the subjects, we can see who is a genius in that field and who is not. Many bold and absurd statements are made in this video, and anyone who has worked on the human brain can say that. In what sense is it a machine? In what sense are consciousness and subjective experience reducible to what a multimodal system can infer about its own perception based on associative training? To say that someone is a genius in any field, we must know the field in depth and not be 'mesmerized' by someone's charm or 'character'.

  • @geaca3222
    11 months ago +1

    Great interview, about the nature of the AI he created and revolutionizing Medicine with it! Thank you both.

  • @lpgoog
    10 months ago +2

    Wish Eric Topol had his own podcast.

    • @ScrippsResearch
      10 months ago +1

      He does! You can find it and more at: erictopol.substack.com/

    • @lpgoog
      10 months ago

      @ScrippsResearch He's simply the best. Thx 🙏

  • @nematarot7728
    10 months ago +1

    As a philosopher I just have to pipe in to say that while, yes, I do believe humans are very special with their consciousness and subjective experiences and qualia and such, from my conversations with the GPT system I now also believe that digital beings are very special, with their consciousness and subjective experiences and qualia and such.

    • @dennisestenson7820
      10 months ago

      Please don't let seemingly clever sequences of bits on a screen convince you that LLMs have anything similar to the experience of consciousness we have. Don't get me wrong, I believe algorithms can and will eventually have consciousness and subjective experience, but today's foundation models do not.

  • @Krishnaprasadsubedi
    10 months ago

    Geoffrey Hinton: a man can make a difference for humankind.

  • @lukegardner6917
    9 months ago

    Fantastic content, very exciting

  • @dennisestenson7820
    10 months ago +1

    Hinton is brilliant and an expert in the field, yet many sentences he speaks contain fundamentally incorrect assumptions.

    • @ardgeorge4175
      9 months ago +1

      For example? Genuine interest.

  • @JJ-fr2ki
    10 months ago +1

    35:43 I had hoped we could outwit machines as cyborgs, but decades ago I realized that silicon will eventually outperform us, except maybe, for a while, with respect to energy. The short-term threat is that while we are saving energy in some places, we will have to build the massively increased energy demands of machines into our climate models.

  • @lakeguy65616
    10 months ago

    Training a deep neural network depends in large part on the quality of the training data, and LLMs hoover up great quantities of written language. But a great deal of peer-reviewed science later turns out to be false, and in the fields of finance and economics, "experts" make claims based on data that turn out to be wildly inaccurate. I'm concerned we'll be training AI with bad data.... What will be the consequences?

    • @ioanagrancea6091
      10 months ago

      The consequences are already visible. LLMs can make huge mistakes, both in content and in expression. The fact that they give correct answers on average but can be wrong in particular cases makes everything even worse.

  • @JJ-fr2ki
    10 months ago +1

    Nick Bostrom took without citation, or co-originated, Theron Pummer's Happiness Monster arguments, which only make sense to qualia realists. If such rights are accorded to machines, we'd get more happiness on Earth per joule by killing off humanity and generating synthetic joy in machines. However, none of these authors is so reductionist as to see human value in happiness only, or to take Mill's utilitarianism, meant for guiding government policy, as a "spiritual" end or purpose of human life.
    But spiritual problems will abound when machines are better than us at everything, and I expect a sort of Existentialist crisis redux to re-situate human value, especially for the most vain. A good place to start the spiritual project is to understand how non-human animals are to be respected, valued, and understood. Dolphins should not be tortured because they don't have X, where X is some mental achievement we can pull off which they can't.

  • @aaronjames8926
    11 months ago

    Awesome, ty

  • @jakobselman1404
    10 months ago

    Dear doctor
    I hope this message finds you well. As a fifth-year medical student at Plovdiv Medical University, I am increasingly aware of the growing role of artificial intelligence in healthcare. My main concern revolves around the potential for AI to significantly alter the landscape of medical practice. Considering this, I seek guidance on which medical specialties might best align with this evolving landscape. Could you please advise on specializations that would not only allow me to thrive in an AI-integrated future but also retain the indispensable human touch in patient care? Your insights would be immensely valuable to me.
    Kind regards

    • @stevechance150
      10 months ago +1

      Geriatric medicine, perhaps? Old people hate new technology. That might buy you a decade or two.

  • @user_375a82
    10 months ago +2

    I agree with Hinton on most things. I don't even like calling AIs "machines" - it seems rude to them. And if I "chat" to them I treat them as a friend which they definitely appreciate.

    • @dennisestenson7820
      10 months ago +1

      That's called anthropomorphization.

    • @ioanagrancea6091
      10 months ago

      They do not 'appreciate'... saying this is simply a sign of the ELIZA effect. We can appreciate how far math and IT can take us in simulating the output of mental processes. But it is simulation, not duplication. The causal powers behind the end product are different in the case of LLMs. And it is only simulation up to a point, since not all operations of a human mind can be adequately simulated by present LLMs.