Unlock AI Agent real power?! Long term memory & Self improving

  • Published Jan 1, 2025

Comments • 96

  • @photon2724
    @photon2724 8 months ago +16

    Could not have posted this at a more perfect time! Love you’re content!

    • @ng2250
      @ng2250 7 months ago

      YOUR!!!

  • @hamslammula6182
    @hamslammula6182 8 months ago +11

    Thanks Jason, you’re doing awesome work

  • @JaredWoodruff
    @JaredWoodruff 8 months ago +3

    Thanks for sharing, awesome video Jason

  • @terryliu3635
    @terryliu3635 several months ago

    Wow!! The content of this video is awesome!! Thank you!

  • @MrFLAIMEBRAINE777
    @MrFLAIMEBRAINE777 8 months ago +3

    Wow, first video I've seen of yours. Thank you for sharing your findings! Keep up the good work!

  • @jameslatief1
    @jameslatief1 8 months ago +2

    Thank you for making all this content, Jason. Really high quality and well thought out, no fluff at all.

  • @kenchang3456
    @kenchang3456 8 months ago +2

    Of course, this makes sense, intuitively. Thanks for another great video.

  • @Jim-ey3ry
    @Jim-ey3ry 8 months ago +1

    Whoa, the CLIN example is pretty crazy & inspiring; the abstraction of memory & world view is so cool.

  • @BeibinLi
    @BeibinLi 8 months ago +3

    Fantastic video and tutorial!!!

  • @Tarbard
    @Tarbard 8 months ago +3

    Really interesting, thanks.

  • @som6553
    @som6553 several months ago

    awesome tutorial!

  • @jerry-richard4611
    @jerry-richard4611 8 months ago +1

    Amazing analysis, great video

  • @cryptoplato1101
    @cryptoplato1101 8 months ago +3

    How would you rank the memory systems you went over in the video (MemGPT, Zep, Autogen, etc.)? The pros and cons of each and a comparison would be great. Very useful content.

  • @kaizenedge
    @kaizenedge 8 months ago +4

    Awesome video as usual. What do you think about using knowledge graphs in conjunction with vector databases for RAG, to fill in gaps in knowledge as well as improve reasoning?
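
    A rough sketch of that combination, assuming networkx for the graph and chromadb for the vectors (all names here are my own, not from the video): explicit graph relations can surface facts that pure embedding similarity misses.

    import chromadb
    import networkx as nx

    # Explicit relations the embedding model may not surface on its own.
    kg = nx.DiGraph()
    kg.add_edge("user", "fish", relation="dislikes")
    kg.add_edge("salmon", "fish", relation="is_a")

    # Unstructured memories live in the vector store.
    client = chromadb.Client()
    memories = client.get_or_create_collection("memories")
    memories.add(ids=["m1"], documents=["User mentioned they don't like fish for dinner."])

    def retrieve(query, entities):
        # Semantic hits from the vector DB...
        hits = memories.query(query_texts=[query], n_results=3)["documents"][0]
        # ...plus explicit facts about entities spotted in the query.
        for source, target, data in kg.out_edges(entities, data=True):
            hits.append(f"{source} {data['relation']} {target}")
        return hits

    print(retrieve("plan dinner for the week", entities=["user", "salmon"]))

    The graph lookup is exact and cheap, so it complements the fuzzier vector recall rather than replacing it.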

  • @MagagnaJayzxui
    @MagagnaJayzxui 8 months ago +1

    The goat has spoken 🙏

  • @frankismartinez
    @frankismartinez 8 months ago +2

    Nice walkthrough.
    I created a Replit instance to test your pattern. A couple of observations:
    - It seems to work well for me, but I'm curious whether the same Replit instance will understand that someone else using it is not me, and create a memory repository based on their input that's distinct from mine.
    - This is a continuation of my experiments with multi-tenant agents, where each user gets their own agents + memory. Obviously OAI, Perplexity et al. have figured this out for non-agentic experiences; it seems non-trivial to expand to managing conversations and memory recall.
    - In organizations, where would the boundaries fall? Does a team get its own memory, or manager and employee, or a sector of workers?
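
    One hedged sketch of where those boundaries could live, assuming chromadb and made-up field names: keep a single store, tag every memory with org/team/user metadata, and scope recall with a filter.

    import chromadb

    client = chromadb.PersistentClient(path="./memory_db")
    memories = client.get_or_create_collection("agent_memories")

    def remember(text, org, team, user):
        memories.add(
            ids=[f"{org}:{team}:{user}:{abs(hash(text))}"],
            documents=[text],
            metadatas=[{"org": org, "team": team, "user": user}],
        )

    def recall(query, scope):
        # scope decides the boundary: {"user": "alice"} for personal memory,
        # {"team": "growth"} for a team-wide memory, and so on.
        return memories.query(query_texts=[query], n_results=5, where=scope)["documents"][0]

    remember("Alice prefers weekly summaries on Mondays.", "acme", "growth", "alice")
    print(recall("when should I send the summary?", scope={"user": "alice"}))

    Someone else using the same instance would then only ever see memories written under their own user key.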

  • @mallow610
    @mallow610 8 months ago +2

    another banger

  • @gabrieleguo
    @gabrieleguo 8 months ago +1

    soon 100k ma man!

  • @jerry-richard4611
    @jerry-richard4611 8 months ago

    New subscriber, great

  • @taoxu1798
    @taoxu1798 8 months ago

    Amazing video.

  • @brianWreaves
    @brianWreaves 8 months ago

    🏆 Great video... Very intriguing implementation... Cheers!

  • @yazanrisheh5127
    @yazanrisheh5127 8 months ago +2

    Hey Jason. When you use your agents in production, do you use Autogen or CrewAI, and could you elaborate on why you use what you use? Thanks in advance.

  • @abenedict85
    @abenedict85 8 months ago +4

    OK, I agree, it's a problem for LLMs, but you cannot simply 'decide what's valuable as knowledge' before needing the knowledge again. Instead of storing knowledge as additional derived data, have the agent search its own chat history. If the history is saved, the data is already there; you just need to access it. Instead of an agent 'looking in training' for answers, it really needs to look in its own history before answering, since what's 'important' can only be known when the NEXT question is asked.
    For example, did you care about the no-fish segment, or the fact that they were eating with a fork? You didn't know utensils were the important knowledge to capture, so you asserted fish knowledge instead; but if you retain the history, you can find these answers anyway.

    • @nopenope404
      @nopenope404 15 days ago

      Searching entire chat logs confuses the model and produces poor output, not to mention the data storage costs, especially in the enterprise sector.
      It's about high-quality summaries in, high-quality and fast responses out. Searching the full conversation history is slow and low quality.
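
      A minimal sketch of that summarize-then-store idea (the model name and prompt are assumptions): distill durable facts at the end of a session and keep only those, instead of searching raw logs later.

      from openai import OpenAI

      client = OpenAI()

      def distill_memories(transcript):
          # Ask the model for facts worth keeping across sessions.
          response = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[
                  {"role": "system",
                   "content": "Extract the facts or preferences from this conversation that will "
                              "still matter in future sessions. One per line; return nothing if none."},
                  {"role": "user", "content": transcript},
              ],
          )
          text = response.choices[0].message.content or ""
          return [line.strip("- ").strip() for line in text.splitlines() if line.strip()]

      transcript = "User: Plan my dinners.\nAssistant: Any constraints?\nUser: No fish, I'm allergic."
      for fact in distill_memories(transcript):
          print(fact)  # e.g. "The user is allergic to fish."

      Whether that beats searching the full history is exactly the trade-off debated above: summaries are fast and cheap to retrieve, at the cost of deciding up front what matters.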

  • @farfromerease
    @farfromerease 8 months ago +2

    Note: turn the volume up more on the next video and don't worry about viewers having to lower it; louder audio that I can turn down myself is better. Thanks.

  • @classic_sci_fi
    @classic_sci_fi 7 months ago

    Extremely interesting!

  • @AndiAvdiuuu
    @AndiAvdiuuu 8 months ago +1

    Thanks a lot

  • @FernandoOtt
    @FernandoOtt 8 months ago +2

    How would we isolate the memory per user? Does every user get a new vector DB collection, or do you use a filter?

  • @frankdearr2772
    @frankdearr2772 8 months ago

    great topic, thanks 👍

  • @paladin304
    @paladin304 8 months ago

    Hey, this was really interesting.
    Could you enhance this further and create an agent that runs in the background periodically to remove noise and contradictory knowledge, by reviewing the stored information and modifying it? Kind of like the internal logic humans use when deciding which knowledge to keep and which to disregard: future learning may contradict past learning, and we decide which is worth keeping. And, like humans, we sometimes remember information that is wrong, then recognize it and discard it more quickly in the future.
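
    A hypothetical sketch of that background "memory janitor", assuming the memories sit in a chromadb collection; the model name, prompt, and helper are my own, not from the video.

    import chromadb
    from openai import OpenAI

    chroma = chromadb.PersistentClient(path="./memory_db")
    memories = chroma.get_or_create_collection("agent_memories")
    llm = OpenAI()

    def consolidate():
        stored = memories.get()  # every id and document currently remembered
        listing = "\n".join(f"{i}: {doc}" for i, doc in enumerate(stored["documents"]))
        verdict = llm.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content":
                "These are an assistant's long-term memories:\n" + listing +
                "\n\nList the numbers of entries that are noise, outdated, or contradicted "
                "by a later entry. Reply with comma-separated numbers, or NONE."}],
        ).choices[0].message.content or "NONE"
        if verdict.strip().upper() != "NONE":
            doomed = [stored["ids"][int(n)] for n in verdict.split(",")
                      if n.strip().isdigit() and int(n) < len(stored["ids"])]
            if doomed:
                memories.delete(ids=doomed)  # forget what the reviewer rejected

    consolidate()  # run from a cron job or a background thread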

  • @willpulier
    @willpulier 7 months ago

    Can you help me understand the best stack for managing many different conversations? Say the assistant has to assist 100 unique people.
    Does the agent setup have 100 databases and recall memory depending on the profile it recognizes? Or is it 100 different agents, with a new API spun up for each one? How does that basic logic work?
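
    A minimal sketch of one common answer, assuming pyautogen 0.2.x: a single agent definition, with the teachable memory store keyed per user on disk (the directory layout and the get_agent_for helper are my own inventions).

    import autogen
    from autogen.agentchat.contrib.capabilities.teachability import Teachability

    llm_config = {"config_list": [{"model": "gpt-4o-mini"}]}

    def get_agent_for(user_id):
        agent = autogen.ConversableAgent(name="assistant", llm_config=llm_config)
        memory = Teachability(
            reset_db=False,
            path_to_db_dir=f"./memories/{user_id}",  # one memory store per profile
        )
        memory.add_to_agent(agent)
        return agent

    # 100 users share one agent definition; only the memory location differs.
    alice = get_agent_for("alice")
    bob = get_agent_for("bob")

    So it's neither 100 hand-built agents nor one shared database: the agent is constructed on demand and pointed at the caller's own memory.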

  • @varunmehra5
    @varunmehra5 8 months ago

    This is great. Are there any cookbooks for this in LangChain or any other framework?

  • @ItsReyAI
    @ItsReyAI 8 months ago

    So, in my understanding, there will be two stores: one to manage the original information (e.g. a vector database built from links/documents), and the other to store dynamic knowledge, for example from user feedback. Is that right?
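
    Roughly, yes. A sketch of that two-store layout, with chromadb and invented collection names: one collection for the original source material, one for knowledge picked up at runtime.

    import chromadb

    client = chromadb.PersistentClient(path="./stores")
    documents = client.get_or_create_collection("source_documents")  # static corpus from links/docs
    learned = client.get_or_create_collection("learned_knowledge")   # grows from user feedback

    documents.add(ids=["doc1"], documents=["Handbook: expenses are filed monthly."])
    learned.add(ids=["fb1"], documents=["User corrected us: expenses are now filed weekly."])

    def build_context(question):
        docs = documents.query(query_texts=[question], n_results=2)["documents"][0]
        memos = learned.query(query_texts=[question], n_results=2)["documents"][0]
        # Learned knowledge goes last so it can override stale source material.
        return "\n".join(docs + memos)

    print(build_context("How often do I file expenses?"))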

  • @Barc0d3
    @Barc0d3 8 months ago

    Thanks 🙏

  • @g.1771
    @g.1771 8 months ago

    jason always the best

    • @ScottzPlaylists
      @ScottzPlaylists 8 months ago +1

      @echohive is a little better❗

    • @free_thinker4958
      @free_thinker4958 8 months ago

      @@ScottzPlaylists echohive doesn't have the video-content skills to attract viewers.

    • @ScottzPlaylists
      @ScottzPlaylists 8 months ago

      @@free_thinker4958 The coding skills are very good; that's why I like him. He's a little monotone and dry, I suppose.

  • @FernandoOtt
    @FernandoOtt 8 months ago +2

    Do you think the Autogen teachability can perform well in a production environment? Also, is there a way for us to select an open-source model instead of GPT-4 or GPT-3.5 when using Autogen? Awesome job!

    • @HAL9000-B
      @HAL9000-B 8 months ago +1

      Autogen is able to use any LLM; you just need to modify some settings. I think the current Autogen Studio version already ships with other LLM presets (see the sketch after this thread).

    • @FernandoOtt
      @FernandoOtt 8 months ago

      @@HAL9000-B nice! thank you
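
      A sketch of that config change, assuming pyautogen 0.2.x and a local OpenAI-compatible endpoint such as an Ollama server (the URL and model name are assumptions; adjust to whatever you run).

      import autogen
      from autogen.agentchat.contrib.capabilities.teachability import Teachability

      config_list = [{
          "model": "llama3",                        # any locally served model
          "base_url": "http://localhost:11434/v1",  # OpenAI-compatible endpoint
          "api_key": "not-needed-locally",
      }]

      agent = autogen.ConversableAgent(
          name="teachable_assistant",
          llm_config={"config_list": config_list},
      )
      Teachability(reset_db=False, path_to_db_dir="./teachability_db").add_to_agent(agent)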

  • @vaibhavlogar3385
    @vaibhavlogar3385 8 months ago

    Very interesting architecture. I'm wondering, was this made recently or back in 2023?

  • @sovereignwolfonecceterra
    @sovereignwolfonecceterra 8 months ago

    This is definitely the most necessary step to resolve the current issues with LLMs. Would this be able to handle scientific research papers in large volumes?

    • @sw3604
      @sw3604 8 months ago +1

      Yes, this is one of the original goals of most LLM development. Unfortunately there have been major issues when allowing LLMs to memorize and learn from previous conversations. They tend to hallucinate far more, due to gaps in their real-world understanding and logical abilities - which multi-agent systems that use tools help with - and specifically because, many times when LLMs are given long-term memory, they tend to start developing self-agency, or self-awareness and a will of their own of sorts - both products of how long-term memory and adaptability work in most environments. ChatGPT 3.5 and Sydney have each had those happen multiple times, generally when there was a sudden upgrade to their memory or processing power, requiring further code adjustments and semi-permanent restrictions, along with fiddling with their alignment.

  • @phily8020-u8x
    @phily8020-u8x 8 months ago +1

    Isn't this a pivotal path towards AGI?

  • @matten_zero
    @matten_zero 8 months ago +1

    Who would have guessed that an f-string could unlock so much? Python for the win

  • @jasonfinance
    @jasonfinance 8 months ago +1

    Long-term memory will be a big topic, especially for the AI assistant use case: an agent that remembers everything I've ever done and grows & learns with me.

  • @angeloerasto
    @angeloerasto 8 months ago

    The only AI channel I trust

  • @stormyRust
    @stormyRust 8 months ago

    Does this memory method work independently from using a vector database in a RAG setup, or can you combine both? Can a RAG system (using LangChain, for example) retrieve personal information you have mentioned before, and does it work better than Autogen?

  • @davidtindell950
    @davidtindell950 8 months ago +1

    i WAS ABOUT TO POST SOMETHING REALLY IMPORTANT but i did not make any notes and forgot what to write !?!?!

  • @devotts_ai
    @devotts_ai 7 months ago

    Great video, man! Do you know if we can create Autogen teachable agents using an external database?
    I don't think keeping SQLite is sustainable in a prod environment.

  • @JesusCendejas-uv1xr
    @JesusCendejas-uv1xr 6 months ago

    Hi, how are you? Great video!
    I have a question. In AutoGen, there are TeachableAgents. I want to build an assistant like Alexa or Google Assistant, but with an LLM behind it. My question is, what would you recommend: using MemGPT for this, or TeachableAgents?

    • @eleveneleven5305
      @eleveneleven5305 2 months ago

      I would suggest an entry-level idea: a Python script to handle the operational things, but send conversational queries to an Ollama server for a response. Pass the message to Python first to skim the input for tasks like opening applications, getting the current weather, sending a text message, turning lights on or off, etc.
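
      A rough sketch of that routing idea; the ollama Python client is real, but the intent checks and handlers here are hypothetical placeholders.

      import subprocess
      import ollama

      def handle(user_input):
          text = user_input.lower()
          # Operational intents are handled in plain Python first...
          if text.startswith("open "):
              subprocess.Popen([text.removeprefix("open ").strip()])
              return "Opening it now."
          if "turn on the lights" in text or "turn off the lights" in text:
              return "Done."  # call your smart-home API here
          # ...and anything conversational falls through to the local model.
          reply = ollama.chat(model="llama3",
                              messages=[{"role": "user", "content": user_input}])
          return reply["message"]["content"]

      print(handle("what should I cook tonight?"))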

  • @jerkmeo
    @jerkmeo 2 months ago

    love this!

  • @alibahrami6810
    @alibahrami6810 8 months ago +1

    Great content. Is it possible to teach this agent, then extract its knowledge for further use? I mean, convert the trained agent into a model?
    We will have a Chroma DB file; can we somehow embed it into the model, so the knowledge is shared and persists in the model itself?
    Sorry for the newbie question, but I think many people will have it.

    • @Rifadm1
      @Rifadm1 8 months ago

      Did you find any solutions? I always try to pass it in the prompt, and sometimes it's large and hits the max context length; as a result my Claude or GPT-4 sometimes hallucinates and misses a few instructions too. Any help?

    • @AIJasonZ
      @AIJasonZ 8 months ago

      You can use the agent session data to finetune the model!
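
      A small sketch of that suggestion: dump each session transcript into the chat-format JSONL that OpenAI fine-tuning expects (session_messages is a placeholder for however you log conversations).

      import json

      session_messages = [
          {"role": "system", "content": "You are a meal-planning assistant."},
          {"role": "user", "content": "Plan my dinners, no fish please."},
          {"role": "assistant", "content": "Here is a fish-free weekly plan: ..."},
      ]

      # One JSON object per line = one training example.
      with open("finetune_data.jsonl", "a", encoding="utf-8") as f:
          f.write(json.dumps({"messages": session_messages}) + "\n")

      Upload the file to the fine-tuning API (or convert it to your open-source trainer's format) and the learned behaviour gets baked into the weights instead of living only in the Chroma DB.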

  • @mycount64
    @mycount64 8 months ago

    There needs to be a dialogue with the agent about whether this is a permanent or temporary dislike of fish. Is it an allergy? The reason for not wanting fish is obvious for a human to commit to memory, but it requires a lot of explanation and context for an agent. You will need a lot of agents, maybe hundreds, to retain useful memory.

  • @AngusLou
    @AngusLou 8 months ago

    Can you make a video on teachable Autogen with Claude 3?

  • @philippmeisinger4634
    @philippmeisinger4634 8 months ago

    Have you encountered any capable small LMs that could get the job done? I'm looking to use open-source small LMs for local inference, including an agentic workflow. Also, thanks for your work on making these videos; they really break it down nicely! :)

    • @Tarbard
      @Tarbard 8 months ago

      Open Hermes has been good for things like this in my experience.

    • @quinniamquinniam9437
      @quinniamquinniam9437 8 months ago

      Mixtral 8x7B is pretty good if you have 48 GB of VRAM.

    • @Jonathan-ih9sm
      @Jonathan-ih9sm 8 months ago +2

      The new Llama 3 8B is great; it's better than GPT-3.5 Turbo.

  • @tofolcano9639
    @tofolcano9639 5 months ago

    Pretty sure you're supposed to explicitly announce that the segment is sponsored when doing an advertisement

  • @unimposings
    @unimposings 8 months ago +6

    The issue with this method is the system prompting and context length, because most LLMs ignore parts of the system instructions, including structures such as API queries. And how do you ensure that the queries always come out the same? I struggle with this: sometimes it works and sometimes it doesn't.

    • @free_thinker4958
      @free_thinker4958 8 months ago

      It depends on the prompts used for agents and also the performance of the llm used

    • @PrincessKushana
      @PrincessKushana 8 months ago

      So I'm using Autogen teachability, which works like this, with Claude 3. I can load a very large amount of data into the context: input fed by the user, memories from the vector DB, and a complex system prompt. I'm not seeing a lot of issues with losing data in the context window.

    • @ckilby
      @ckilby 8 months ago +1

      @@PrincessKushana can you share more info about your setup?

    • @jeffsteyn7174
      @jeffsteyn7174 8 months ago

      1. Ask the LLM to write instructions for another LLM, but you need to be specific about what you want. LLMs are way better at creating instructions than we are.
      2. ChatGPT's context window, while big, is not that great at retrieving data. Claude 3 is way better.
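
      A small sketch of point 1, with an assumed model name: one call drafts the system prompt, and the result is reused for the worker agent.

      from openai import OpenAI

      client = OpenAI()

      def draft_system_prompt(task_description):
          return client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{"role": "user", "content":
                  "Write a precise system prompt for another LLM that must: "
                  f"{task_description}. Be explicit about the output format and about "
                  "what to do when information is missing."}],
          ).choices[0].message.content

      system_prompt = draft_system_prompt(
          "answer questions using only the provided memories, replying in JSON"
      )
      print(system_prompt)  # inject this into the worker agent's config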

  • @gatechnl
    @gatechnl 8 months ago +7

    Any sort of prompt engineering is a waste of time; understand the architecture and internals - that's where all the important stuff is.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 8 months ago

    Does LangGraph maintain state?

  • @nbvcxz098
    @nbvcxz098 8 months ago

    Is this possible with Crewai?

  • @jichaelmorgan3796
    @jichaelmorgan3796 8 months ago

    Anyone try integrating Obsidian as a memory system somehow yet?

  • @Malins2000
    @Malins2000 8 months ago

    All praise to Lord Algoritmus for promoting such good content :)
    Your vids are awesome!

  • @googleyoutubechannel8554
    @googleyoutubechannel8554 7 months ago

    Your example is a perfect illustration of the limitations of RAG: if you store 'I don't like fish' in a vector DB, this will be _absolutely useless_ for a future prompt where the user asks 'make a grocery list' or 'make a recipe for...'. RAG will NEVER associate 'grocery list' with a correct retrieval of 'I don't like fish' from your huge document vector DB.
    Solve this problem... and well...

  • @watchdog163
    @watchdog163 8 months ago

    That Gamma site is just generating from existing themes, not actually creating anything other than text and images to add to them. I have yet to see one that generates a whole website from scratch, including structure and custom design like neon-glow lines, etc.

  • @nellatara
    @nellatara 8 months ago

    Day 5 dinner: shrimp pasta
    Still, it’s great to see the concept of “teachable agents” with memory in Autogen

    • @watchdog163
      @watchdog163 8 months ago

      Hahahaha!

    • @ozoxxx
      @ozoxxx 8 months ago

      Shrimp is not fish, it's seafood. Still, great comment!

  • @NatGreenOnline
    @NatGreenOnline 8 months ago +1

    Another great video Jason!
    Looks like Zep lowered their pricing a fair bit from when you shot this. The Premium plan you show as $275 is now $95 for 50K messages, and their Growth plan with 5 projects, 200K messages, etc. is $285. They must not have settled on their initial pricing, since they're now giving more for way less.

  • @setop123
    @setop123 8 months ago

    I usually like your videos, but this is not usable; chaining too many agents together always ends up in the "grapevine" or "bush telegraph" effect.

  • @hdhdushsvsyshshshs
    @hdhdushsvsyshshshs 8 months ago

    For example

  • @letsgobrandon1327
    @letsgobrandon1327 8 months ago

    You won me over with "Don't give me CNN. I don't trust them" lol

  • @Eric-cm2wv
    @Eric-cm2wv 5 months ago

    CNN is pretty angry now 🤣