Last Week in AI

  • Published Jan 1, 2025

Comments • 10

  • @returncode0000 · 4 days ago +1

    1:17:00 This is the first time someone has explained the gist of a paper in his own words and I immediately understood what was meant. Brilliant. Totally underrated podcast!

  • @Rbyn · 5 days ago +1

    Of course we watch to the end! At least I do! No one gives insights into the news like you guys do!

  • @oimrqs1691 · 7 days ago +1

    First time listening to you guys! Pretty nice!

  • @dustinwehr2433 · 5 days ago +1

    Why didn't you mention Quiet-STaR (not to be confused with "Q*") in the coverage of Coconut? It was so slick... very underappreciated IMO. Their reinforcement learning algorithm results in a model that thinks (outputs thought tokens) in gibberish. And they DON'T use special chain-of-thought data; just next-token prediction with the same data used in pretraining. The thought tokens start as natural language, since you start with a normal pretrained model, but they evolve into gibberish as the model gets better. IIRC they didn't call it a "continuous latent space" and generally didn't do much of the usual this-is-a-groundbreaking-paper signaling (e.g. giving a convoluted Bayesian derivation with no simple explanation to make the ideas seem more sophisticated). But yeah, it's pretty much a continuous latent space.
    Anyway, love your show!

    • @dustinwehr2433 · 5 days ago +1

      IIRC, they pointed to making the number of thought tokens per output token variable as an essential open problem. For some fixed k, the model they train "thinks" for k tokens before *every* output token, so inference-time compute is roughly k times that of the base model (see the sketch after this thread); very costly. Obviously that's not optimal, since most output tokens are easy to predict.

    • @lastweekinai · 5 days ago

      That's a good point! Honestly, we probably just didn't think to; there's been a ton of cool research in the general domain of LLM reasoning over the past year, and there was lots we could have gotten into... Nice to see that the Coconut paper did cite Quiet-STaR, though.
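
A minimal sketch of the fixed-k decoding loop described in the thread above. This is an illustration of the idea only, not Quiet-STaR's actual algorithm or API (the paper's parallel thought sampling is more involved), and all names here (ToyModel, sample_next, START_THOUGHT, END_THOUGHT) are hypothetical stand-ins. Before every visible output token the model emits k hidden thought tokens, so each visible token costs roughly k + 1 sampling steps instead of 1:

```python
import random

# Illustrative special-token ids; a real implementation would reserve
# these in the tokenizer's vocabulary.
START_THOUGHT, END_THOUGHT = -1, -2

class ToyModel:
    """Stand-in for a language model: samples a random token id."""
    def sample_next(self, context):
        return random.randrange(1000)

def generate(model, prompt_ids, num_output_tokens, k):
    """Emit k hidden thought tokens before every visible output token."""
    context = list(prompt_ids)
    visible = []
    for _ in range(num_output_tokens):
        # Think: k tokens the user never sees, but which condition
        # the next visible prediction.
        thought = [START_THOUGHT]
        for _ in range(k):
            thought.append(model.sample_next(context + thought))
        thought.append(END_THOUGHT)

        # Speak: one visible token conditioned on prompt plus thoughts.
        token = model.sample_next(context + thought)
        visible.append(token)
        # In this simplification, thoughts are dropped from the running context.
        context.append(token)
    return visible

print(generate(ToyModel(), prompt_ids=[1, 2, 3], num_output_tokens=5, k=4))
```

Making k adaptive per position, small or zero for easy tokens and larger for hard ones, is exactly the open problem the reply above points to.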

  • @burnytech · 8 days ago

    biggest week

  • @likhitkalla · 8 days ago

    How is this podcast so underrated?