Artificial Intelligence & Large Language Models: Steve Hsu Oxford Lecture - #35

  • Published Nov 10, 2024

Comments • 12

  • @blueblimp
    @blueblimp 1 year ago

    The Jung-Freud problem (which you mentioned before in another podcast) is very interesting! The trouble is, not only can you not trust the LLM, but you can't even trust the pre-training data, at least not with the way that pre-training is conventionally done today.
    A similar issue could arise in fictional, hypothetical, and historical scenarios. For example, it's good for ChatGPT to think the world is round, but in a fictional setting with a flat world, that'd be undesirable. So this is a very generally applicable problem.

    • @sachetanbhat
      @sachetanbhat 1 year ago +1

      Can you please point me to the podcast you mentioned in your comment?

    • @blueblimp
      @blueblimp 1 year ago

      @@sachetanbhat I think it was Danny In The Valley's 2023-04-20 episode, interviewing Hsu.

  • @k14pc
    @k14pc 1 year ago

    Extremely interesting.
    One note on the ending: it is possible that these LLM-derived minds will not be conscious. E.g., David Chalmers guesses that there is a 10% probability that current models are conscious and a 20% probability that future models will be. Of course no one really knows, but the prospect of being replaced by completely unconscious beings seems unappealing, to say the least...

  • @666LatinoCad
    @666LatinoCad 1 year ago

    Pretty interesting and provocative transhumanist perspective towards the end

    • @xsuploader
      @xsuploader 1 year ago

      I wouldn't call the idea that there will be no human brains in 1 million years provocative. You'd have to be stupid to believe otherwise, given the rate of advancement.

  • @ajohny8954
    @ajohny8954 1 year ago +3

    That ending is very pessimistic for me, but I'm guessing Steve would describe it as optimistic. Regardless, it seems more or less inevitable.

  • @toki_doki
    @toki_doki 1 year ago +1

    You said it has implications at the level of the universe that, so soon after our ape brains evolved, we can build AGI. What did you mean? I presume you mean that biological advanced intelligence is probably rare and transient, and that the universe should instead be filled with AI. If so, where are they?

    • @StephenHsu
      @StephenHsu 1 year ago +3

      Taken further, it suggests we might live in a simulation. If you can create AGIs you can create virtual worlds filled with agents who may not know they are AIs... The timescale during which most sentient beings are evolved biological creatures (versus mostly AIs, perhaps living in virtual worlds) seems very small (transient) compared to the age of the universe.

  • @fog3911
    @fog3911 1 year ago +1

    👍👍👍

  • @labanyu
    @labanyu 1 year ago

    Well that was pessimistic