LLMs as Operating Systems Agent Memory | Understanding MemGPT

  • Published Jan 19, 2025

Comments • 6

  • @evanfollis9733 · 5 days ago

    I love the foundational approach. I have constructed a very similar platform more targeted for multi-agent systems. I’d love to compare thoughts/ideas.
    Where can I access the “more complex applications” that you reference at the end of the video?

  • @jmg9509 · 22 days ago

    Amazing architecture! It answered the question I had from the previous video: store not-currently-relevant information in an archive outside the context window instead of outright overwriting/deleting it. I wonder how latency would hold up as the recall and archival DBs grow.
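    The eviction-to-archive idea this comment describes can be sketched minimally as below. All class and method names here are illustrative assumptions, not the actual MemGPT/Letta API; a real system would use an embedding index for search, which is where the commenter's latency concern would show up as the store grows.

    ```python
    class ArchivalMemory:
        """External store that lives outside the LLM context window."""

        def __init__(self):
            self._entries = []

        def insert(self, text):
            self._entries.append(text)

        def search(self, query):
            # Naive keyword match as a stand-in for vector search;
            # in a real DB, query latency depends on index size.
            q = query.lower()
            return [e for e in self._entries if q in e.lower()]


    class Agent:
        def __init__(self, context_limit=3):
            self.context = []              # facts currently in the context window
            self.archive = ArchivalMemory()
            self.context_limit = context_limit

        def remember(self, fact):
            self.context.append(fact)
            # When the context window fills, evict the oldest fact to
            # the archive instead of overwriting/deleting it.
            if len(self.context) > self.context_limit:
                evicted = self.context.pop(0)
                self.archive.insert(evicted)

        def recall(self, query):
            return self.archive.search(query)


    agent = Agent(context_limit=2)
    agent.remember("user prefers dark mode")
    agent.remember("user lives in Berlin")
    agent.remember("user speaks German")   # evicts the first fact to the archive
    print(agent.recall("dark mode"))       # -> ['user prefers dark mode']
    ```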

  • @therealsergio · 19 days ago

    Optimal context compilation is non-trivial. I think one of the foundational memories of an agent must be the documents it can create and manage via a tool, as opposed to some proprietary native memory mechanism. We need federated knowledge management. One of those documents should be the system prompt itself, so an LLM can reflect on its own prompt and suggest and test prompt improvements.
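    The prompt-as-document suggestion above could look roughly like the sketch below: the system prompt is just another versioned document the agent edits through a tool, so a proposed change can be tested and rolled back if it performs worse. Everything here is a hypothetical illustration, not an API from the video.

    ```python
    class PromptDocument:
        """System prompt managed as an ordinary, versioned document."""

        def __init__(self, text):
            self.text = text
            self.history = [text]   # keep versions so edits can be tested and undone

        def rewrite(self, new_text):
            # Tool call an agent could make after reflecting on its own prompt.
            self.history.append(new_text)
            self.text = new_text

        def rollback(self):
            # Revert to the previous version if the edit tested worse.
            if len(self.history) > 1:
                self.history.pop()
                self.text = self.history[-1]


    prompt = PromptDocument("You are a helpful assistant.")
    prompt.rewrite("You are a helpful assistant. Always cite sources.")
    prompt.rollback()                     # the trial edit did not help; undo it
    print(prompt.text)                    # -> You are a helpful assistant.
    ```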

  • @I_am_who_I_am_who_I_am · 21 days ago

    There are people who don't have inner thoughts... and they're actually better at thinking. Just my 2 cents.

    • @vitalyl1327 · 19 days ago

      We do have inner thoughts; what we lack is an inner monologue. We don't think in spoken languages, but those inner representations are still languages, just higher-level and more complex than natural languages.