What Do LLMs Tell Us About the Nature of Language - And Ourselves? - Ep. 23 with Robin Sloan

  • Published Jul 20, 2024
  • An interview with best-selling sci-fi novelist Robin Sloan
    One of my favorite fiction writers, NYT bestselling author Robin Sloan, just wrote the first novel I’ve seen that’s inspired by LLMs.
    It’s called Moonbound, and he originally started trying to write in 2016 with language models. But he found that the models he was using couldn’t quite generate the ambitious creative output he was after.
    He did, however, find himself utterly taken by LLMs and their inner workings. He thinks language models are language itself given its first dose of autonomy. He has a deep fascination for and understanding of technology, language, and storytelling, and he weaves all of these together in his book in a way that helps us understand LLMs as part of a broader human story.
    I sat down with Robin for a wide-ranging discussion about technology, philosophy, ethics, and biology, and I came away more excited than ever about the possibilities that the future holds. We dive into:
    - Robin’s experiments with AI, dating back to 2016
    - The central question humans and LLMs grapple with: what happens next?
    - How LLMs breathe life into language
    - Robin’s pet theory about the interplay between LLMs, dreams, and books
    This is a must-watch for science-fiction enthusiasts and anyone interested in the deep philosophical questions raised by LLMs and the way they function.
    If you found this episode interesting, please like, subscribe, comment, and share!
    Want even more?
    Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: every.ck.page/ultimate-guide-.... It’s usually only for paying subscribers, but you can get it here for free.
    To hear more from Dan Shipper:
    - Subscribe to Every: every.to/subscribe
    - Follow him on X: / danshipper
    Timestamps:
    00:00:00 - Teaser
    00:00:53 - Introduction
    00:02:47 - A primer on Robin's new book Moonbound
    00:04:05 - Robin's experiments with AI, dating back to 2016
    00:08:39 - What Robin finds fascinating about LLMs and their mechanics
    00:14:09 - Can LLMs write truly great fiction?
    00:27:19 - The stories built into modern LLMs
    00:30:50 - What Robin believes to be the central question of the human race
    00:36:38 - Are LLMs "beings" of some kind?
    00:42:26 - What Robin finds interesting about the concept of “I”
    00:49:40 - Robin's pet theory about the interplay between LLMs, dreams, and books
    Links to resources mentioned in the episode:
    - Robin Sloan: www.robinsloan.com/
    - Robin’s books: Mr. Penumbra's 24-Hour Bookstore, Sourdough, Moonbound
    - Dan’s first interview with Robin four years ago: every.to/superorganizers/tast...
    - Anthropic AI’s paper about how concepts are represented inside LLMs: www.anthropic.com/news/mappin...
    - Dan’s interview with Notion engineer Linus Lee: • Inside the Mind of an ...
    - Big Biology, the podcast that Robin enjoys listening to: www.bigbiology.org/
  • Science & Technology

Comments • 4

  • @mostlynotworking4112 • months ago • +4

    John Vervaeke and Jonathan Pageau have been part of many interesting AI convos

  • @xinehat • months ago

    Loved every part of this conversation. Probably my favorite episode yet.
    When Robin was speaking about the Stanford project that allows you to move fluidly through the embedding space, it reminded me of the way I’ve been using Krea (incorrectly?) to co-create images. It basically gives you access to a kind of swimming pool of pure AI hallucination, which is fascinating. I use one- to two-word prompts and add images to the user side of the screen, which the AI then interprets based on the prompt. At that point you can dive into the pool and play and explore and move your images around pixel by pixel, leading to completely new hallucinations. You come across threads that fascinate you and speak to you and lead you in entirely different directions than you had been heading. I’d been in a creative block for years, and it has completely broken me out of it. I think the models’ hallucinations are their strength, not their weakness, and learning to guide those hallucinations is a “skill set” that isn’t talked about enough.

    • @EveryInc • 29 days ago

      so glad you enjoyed it!

  • @eugeniocg3079 • months ago

    awesome