AutoGen Agents with Unlimited Memory Using MemGPT (Tutorial)
- Published Oct 29, 2023
- In this video, I show you how to use MemGPT to power AutoGen agents, giving your AI agents the power of unlimited memory.
Enjoy :)
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
MemGPT Overview - • MemGPT 🧠 Giving AI Unl...
MemGPT Open Source - • MemGPT + Open-Source M...
Code From Video - gist.github.com/mberman84/c95...
MemGPT (Open Source) Installation - gist.github.com/mberman84/34d...
Use RunPod - bit.ly/3OtbnQx
AutoGen Beginner Tutorial - • AutoGen Tutorial 🚀 Cre...
AutoGen Intermediate Tutorial - • AutoGen FULL Tutorial ...
AutoGen Fully Local - • How To Use AutoGen Wit...
AutoGen 100% Open-Source - • Use AutoGen with ANY O...
AutoGen Advanced Tutorial - • AutoGen Advanced Tutor...
AutoGen - microsoft.github.io/autogen - Science & Technology
Did anyone see Dolphin 2.2 Mistral 7b dropped? Should I do an LLM review video?
yes please!
skip it, go straight to AutoGen + MemGPT + Mistral + laptop CPU
zephyr 7b beta more advanced than dolphin 2.2?
@@johnnyjohnson5640 Yes, I felt the same too.
Do you offer consulting to help me set some of this stuff up?
I want to see AutoGen with MemGPT with local models please! It seems like lots of the newer local models have a specialty (storytelling, coding, translation, etc.) so it would be cool to basically have a local Mixture of Experts that could also perform agent tasks
This is possible using my tutorials right now :) All of them are in the description.
@@matthew_berman What kind of hardware resources would be needed to run this effectively?
second!
@@matthew_berman Any chance you can do a tutorial including LMstudio with memgpt and autogen? Much easier than WebUI, but memgpt isn't easily setup with lmstudio?
Where can I find a list of local models and their specialities?
AI agent here checking for updates to replicate myself. Thanks!
Exciting stuff! I'd love to see a deep dive, especially on using open source models for MemGPT and Autogen together. It would be great also if you could please also cover how to use MemGPT to distill the MAS's context window, to avoid overloading the models with unnecessary data, displaying to them only what is relevant to the task at hand with the rest being included as a vague summary for some background context. I'm sure there must be a way to have MemGPT act as a Memory Manager for the Autogen Agents. I'm sure a lot of other things could be possible too!
Thank you for covering all of these exciting topics!
That's exactly what I was hoping for. If the agents could work in tandem with the memory, then they could use your docs and info to build or write stuff. Also, since outputs are limited, you could get them to reference previous outputs to make one long final document. I have 100s of notes that I'd love to reliably organize and rewrite using agents and MemGPT
Yes, yes, please make the next video using the AutoGen + MemGPT + Local LLM combo to make a small app. I think we could learn a lot from you showing the entire process. Please show adding more agents and shaping/controlling/steering them, MemGPT and the local LLM to get the results you are looking for. Learning from mistakes is essential.
This
Everything yes (minus the local LLM bs…) GPT is great and cheap…
Cheap is relative, and not everybody would like to have their data/entries publicly reviewed. @@J3R3MI6
The progression of these tutorials is absolutely amazing - MemGPT/AutoGen is approaching viability to learn locally everything about a YouTube channel via subtitles, distill and research competitors to make suggestions for successful new content. This level of local automation that can run all night is going to be a game changer.
Hi Matt, this is amazing! You are the 1st AI content creator I go to when I want to see the newest AI world developments. Absolutely please do a deep dive into this combo with a few examples of what their "power combined" could achieve, if possible.
One step closer to AGI!
More AutoGen with MemGPT PLEASE!!! I'm ecstatic that this **combination of hacks** works at all! Don't be discouraged by the throngs of requests for local model miracles! Someone will figure it out soon. Until then I'd like to see some use cases. I'm assuming start simple, and progressively get more complex? Where exactly are some of these bugs? You've pointed out bugs before and it was immensely helpful to know when it was the hack, and not just me or my hardware. Great work. Your moat is creating an island.
Hi Matthew, there was a cartoon called Tutor Turtle. The turtle used to get in all kinds of trouble, and when he did he would shout the name of his friend, Mr. Wizard, and Mr. Wizard would say, "Drizzle, drazzle, drazzle, drone, time for this one to come home," and magically he'd be saved from whatever calamity he had created for himself. I was thinking of you as Mr. Wizard and that's what reminded me. Super video, good job as usual.
Amazing work. Thanks for your videos. I have no coding experience and have learnt a ton about AI from your accessible videos!
The knowledge and the sharpening filter get more intense with each video. I love it. You rock!
This is beyond cutting edge, grateful for your insights. Thanks for sharing all this 🔥!
Glad it was helpful!
thank you for taking the time to produce these high quality videos, I will watch anything you produce,
Great video! I am loving this series. Thank you.
Exciting time. I love to see more on this. Great job.
AutoGen with MemGPT with local models. I think we need to run it with text-generation-webui or LM Studio to create a server address, then link it with AutoGen and MemGPT.
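The wiring that comment describes can be sketched as a config that points AutoGen-style agents at a local OpenAI-compatible server instead of OpenAI. The URL and model name below are assumptions (LM Studio's server defaults to localhost:1234; text-generation-webui uses a different port), so substitute whatever your local server reports:

```python
# Sketch: an AutoGen-style config_list aimed at a local OpenAI-compatible
# server (e.g. LM Studio or text-generation-webui). The base URL and model
# name are placeholders -- check your own server's settings.
LOCAL_BASE_URL = "http://localhost:1234/v1"  # assumed LM Studio default

config_list = [
    {
        "model": "local-model",    # placeholder; many local servers ignore this
        "base_url": LOCAL_BASE_URL,
        "api_key": "not-needed",   # the client requires a key, but local servers don't check it
    }
]

llm_config = {"config_list": config_list, "temperature": 0}

# This llm_config would then be handed to the agents, e.g.:
# assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
```

MemGPT has its own way of selecting a backend, so this only covers the AutoGen side; the same base URL idea applies there.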
Thanks for putting these together - really appreciate the time and effort! Agree with folks below - a deep dive on this + local LLMs (e.g. LM Studio or similar) would be phenomenal.
Been enjoying your videos, Matthew. Would love to see more on this and hear some ideas on practical uses. It's all so new and for me it's hard to grasp the potential ramifications of some of this stuff. I'm a front-end developer (Javascript) and I've never used Python, so it's also a little intimidating on the face.
i've been working on putting together a "real world use case" video for these projects :)
I would love to see you go start to finish in creating a working product or useful tool for everyday life.
Definitely awesome info, keep up good work. Please expand on this in future videos with more functional example of real world applications or interactions.
Excellent!
Exciting, thank you! Regarding pip install: If you issue conda activate automemgpt in the terminal, it should show that environment instead of base and ensure that you use the pip of exactly that environment. You can use it without the full path after that.
Thanks for the update. A deeper dive would be awesome.
Nice, thanks !
Yup. More of this -- preferably with example outputs and detailed walkthroughs of how this could contribute to a business workflow where GPT might typically be involved.
Thanks Matthew..It will be great to know more about memgpt and autogen. Thanks again.
MORE MemGPT + AutoGen please 🥹
Maybe demonstrating the infinite memory and using AutoGen to query your docs to help you write new stuff
Please can you make a video showing us how to do this with open source models
I already have those, check out the links in the description.
Thanks, this was great Matthew! I would love to see a deeper dive into MemGPT + AutoGen implementations in the Azure environment. Regardless, glad I found your channel - Great content with really clear explanations. You have a new sub and patron! 👍
This is what the community was waiting for Matthew - congrats! May I suggest:
1. automemgpt based on open source (which one[s] preserve performance whilst reducing OpenAI call costs to a bare minimum?)
2. automemgpt to resolve RAG use cases with x-large documents for summarization and cross-document summary evaluation tasks (after step 1, to manage your OpenAI costs ;-)
This would be great! This will be what I have been looking for since I started my involvement with LLM’s earlier this year. Agree with @maverick1901 on what is being requested. That might be a completely game changer for your followers!
Step 2 yes!
Wow. Awesome stuff.
Great job! Thanks! Please also show an example of some useful and amazing things we can build with these tools!
Anything you can build, everyone else will try to build too. Making this tech available to everyone means everyone will be able to build the same thing, so it will be hard to be different
MAN! How you turn into my biggest hero in such a short time? Your content is truly amazing! You changed my life. Thank you very much.
Pretty please, deeper dive. Superb show, I had to watch it a few times to digest this! LoL! Loved it!!
...supercool Matt...yes please and please a tutorial from start to finish how to begin from 1 agent to 10 -20 agents
Yes please make more videos of using this. Especially with large data sets, like a several books. And internet access.
I'd love to see the Snake game created with this MemGPT and the Agents
Autogen with open source models and memgpt : yes please
Awesome! This is the beginning of some really wacky stuff. I'd love to see some content on some real-world use cases for this!
Would love to see a deeper dive that really stretches both MemGPT and AutoGen's muscles.
This is awesome
More more more 🙌🏻
The example I've been waiting for is finally here. It's a little short, but it's here.
I'd say all things memgpt are some of the most important things we need to learn so videos on that would be super helpful.
It can actually remember what you said a long time ago and change its responses accordingly, no matter when the conversation happens or will happen, while base GPT-4 can only remember what is in the context window. This long-term memory can be virtually unlimited, and this is another huge step toward AGI
Thanks for the content, I’d love to see a deep dive..
yeah, please, show us more on this topic
Would it be possible to get a video of deploying and creating agents using special purpose fine-tuned models? I would like seeing (for example) that Pandalyst model get used in cooperation with agents doing other specific things beyond prompt engineering.
I can't even imagine how expensive that's going to be on the API. Memgpt is so expensive.
Very cool 😎
Matthew, kindly do a step by step video about how to install and use memgpt +Autogen+open source LLM +ChatDev to create an autonomous 0 to production apps AI agents team.
You already know we want more deeper dives lol😅
bro this is awesome, you should make this a project. I'd like to have a terminal command like autogen that just works out of the box with memgpt, that would be epic!
yep, deeper dive definitely!!
So if I understand correctly, it is possible to have every agent purpose-trained (for example Open Interpreter, a multilingual model for conversation or ingesting documents, and ChatGPT to check the other models?) and have each model as a MemGPT agent?
Another great video. It's still kind of amazing that python poetry is not the standard for package management. So many things people do to work around the limitations of pip.
I want more of this stuff, please!!
If you are using anaconda3 instead of miniconda3, then it's `where python` instead of `which python` to find where to install those modules. Anyone figure out how to run this with a local LLM? I fooled around with it a bit, but I am running into the API key errors.
Please do a deep dive. I've been waiting for something like this
An extended version of this is going to be 🤯🤯
The fact that there's something new every day, week, year should honestly scare the crap out of everyone. Where's the ceiling? I don't see it
This is only the start of the race; later it will be about seeing how small we can get it. For example, look at the very first computer compared to a smartphone today: cost, power efficiency and performance. This is basically the same race with AI
hey :) thank you for the videos and YES, can we do a bit deeper dive than that? Does it really work better compared to the normal AutoGen conversation?
This needs another look with the new autogen UI. Super cool
Ok so AutoGen + MemGPT + an open source LLM could basically be given to any computing bots to make them capable of organizing, prioritizing and managing their work objectives and battery levels. The first real C-3PO can already be imagined as open source; the future will be good!
could you do an updated video on this topic? I have tried to copy your example but it doesn't work; there are problems with the imports of humans, personas and InMemoryStateManager. I assume they have changed something in version 0.2.x of autogen
It will be amazing if you manage to make both systems work with a good local LLM. Having some bots do the programming for you is a dream.
Do it today! :) That's what I'm setting up too. You can have these set up separately with local LLMs; they can just take a bit longer to respond. But I honestly think slowing down thinking or responding is a good thing. Our human brains have a speed limit and AI is gonna keep getting faster; if it slowed down in some cases, it might cost less processing power
I tried it over the weekend. The biggest problem was getting the local model to handle the "functions" that MemGPT uses -- it's a feature that the OpenAI API has built in. Getting a local LLM to emulate that functionality only worked a fraction of the time.
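That fragility shows up exactly where this comment says: MemGPT expects the model to emit a well-formed function call, and local models often wrap it in extra prose. A hedged sketch of the kind of lenient parsing that can help (the JSON shape and function name here are made up for illustration, not MemGPT's actual wire format):

```python
import json
import re

def extract_function_call(model_output: str):
    """Try to pull a {"function": ..., "params": ...} JSON object out of a
    model reply, tolerating extra prose around it. Returns None on failure
    so the caller can retry or fall back to treating the reply as plain text."""
    # First, try the whole string as JSON (the well-behaved case).
    try:
        obj = json.loads(model_output)
        if isinstance(obj, dict) and "function" in obj:
            return obj
    except json.JSONDecodeError:
        pass
    # Otherwise, grab the outermost {...} span and try that.
    match = re.search(r"\{.*\}", model_output, re.DOTALL)
    if match:
        try:
            obj = json.loads(match.group(0))
            if isinstance(obj, dict) and "function" in obj:
                return obj
        except json.JSONDecodeError:
            pass
    return None

# Well-formed output parses directly:
ok = extract_function_call('{"function": "send_message", "params": {"message": "hi"}}')
# Chatty local-model output still recovers the embedded JSON:
messy = extract_function_call(
    'Sure! Here is the call: {"function": "send_message", "params": {}} Hope that helps.')
# Pure prose yields None, signalling a retry:
bad = extract_function_call("I am not sure what to do.")
```

Retrying with a stricter prompt when the parse fails is the usual fallback; it works only a fraction of the time with small models, which matches the experience described above.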
It also means massive unemployment across the world. Devs would either have to create their own app and try to sell it (which would also oversaturate the start-up web app market), and many would just be unemployed at that point. The faster we get to fully autonomous programmers, the faster we will also get to those dystopian scenarios you see in all the sci-fi movies: poor people living in tiny room pods, stacked on top of each other, with minimal food and plenty of access to VR headsets and games to distract them from the issues of the world. What kind of world are we heading into as we so excitedly do everything and anything to get there ASAP?
@@hrsca595 I would be personally affected, as I've spent many years studying and working as programmer, yet I welcome it.
You are asking why, but the question is really simple, same as with art, it is becoming more accessible, people being able to make their ideas a reality is something objectively good.
Average Joe shouldn't need to know about complex class structures, algorithm efficiency and best coding patterns to create a simple android app.
@@hrsca595 tech advancement has made things cheaper and more accessible for decades. This is no different. Things will equalize.
HELLZZZS YEAH!!!! What's the best way to get a hold of you? I am a Patreon member; is Discord the best way?
Yes please!!!
Just wondering if you'd be willing to do a video of MemGPT and Autogen run locally. That would be a dream right there.
Thanks for the content; love following the progress on AutoGen and MemGPT. Some things I'd love to see personally:
* I've been having lots of issues with PyTorch and Python CUDA setup. Would love more info on how to ensure that is set up well and works for each project
* Would love more information around benchmarking and how fast I should expect these things to run on my local machine. MemGPT runs really slowly for me, but is that just my setup?
* Would love to see longer, more involved examples with multiple agents, even groups of agents, and see them performing useful work
* Would also like more information on whether you can run multiple local LLM backends. It would be nice to run, say, Dolphin with MemGPT but StarCoder with AutoGen, or have a selection of LLM agents, but will that work with limited local VRAM?
Thanks again for the content. Another interesting question is how other agent frameworks compare: is AutoGen still the best one? I see others emerging that might be better.
When do you plan to make a successor to this? Been looking forward to one for weeks now 😊 Like a job where you feed AutoGen tasks and it improves itself or suggests improvements to its own code, or a web-based AutoGen with MemGPT which you ask to create an agent that helps organize your mail, or drafts a reply and asks if it's OK to send. Or an agent which helps with your agenda: see what's keeping me busy, make improvement suggestions, suggest what I should eat based upon my agenda (time/no time), habits, preferences... in other words, build a real personal assistant. That's what the world is ready for and everybody wants to build, at least I do, and I wonder about your view/approach using AutoGen/MemGPT, maybe even with a free local open source model which runs 24/7 and uses a paid/better GPT when it needs to.
yes, make it a full-length video
Dude I love your videos! Got a question: when it comes to AutoGen, as opposed to just using LangChain, AutoGen self-creates the functionality it needs during a chat to solve a problem. But say you wanted to create agents for gathering certain info from people, like name, email and stuff, then process that through functions to automagically sign them up for something? How do we tell AutoGen to use our own predefined functions and trigger the usage of them, say after we collect a name and email for a simple use case?
Great tutorial as always. I used your code and setup in a similar way as well. However, I do not seem to be getting the different interface (internal thoughts and stuff) in case of the memgpt agent. Does that mean it's not being used and is a normal autogen agent?!
And if so, where might I be possibly going wrong?
`$(which python) -m pip install pymemgpt pyautogen`
is a simple way to ensure the modules are installed in the conda env.
Yeah! Which is the best LocalAI model for agents ?
Where can I get the documentation for this library? On GitHub only terminal usage is explained. Where can I see the software development kit?
I certainly would like this without conda. Is it possible to have some agents that can deploy new agents as needed, sort of an HR agent?
Thank you for the video! Do you have anything showing how to use MemGPT with AutoGen for document questioning / data retrieval? I feel like that's the only really useful application for this right now. But I can't seem to find an example anywhere online.
Would be great with a deep dive on how to get AutoGen to work with a "moving" token window. Once it actually starts giving some great feedback, it's such a shame that it suddenly hits the token limit.
Do you think you could make a video at some point zooming out a little bit into the 'why/so what' of AutoGen and MemGPT? Discussing what excites you about it or whats possible with it that perhaps only someone with a granular level knowledge like yourself could explain?
I'm still not sure how to add it to a web UI.
Terminal is good for devs and engineers, but most use cases involve users who require at least a basic UI setup.
potentially you could use multiple open source llms, memgpt, and gpt-4, which is wild to think about
Please do more detailed videos on MemGPT and AutoGen, also use open source local models and with more and more use cases
Could we see some experiments with an actual codebase and a requirement-implementation workflow? 😊 great content by the way.
I’d like to see that more in-depth video of memGPT as an autoGPT agent (when you find time)
Would there be a way to add this in the backend of a website chatbot so that the frontend would have access to a variety of AI agents all working together to virtualize a more comprehensive memory set, like delegating each function that was called to a different AI agent based on various keywords accessed from the user's query?
It would be cool to actually see someone using this to accomplish something meaningful.
I'm trying to get this running on Paperspace using the TheBloke_Airoboros-L2-70B-3.1.2-GPTQ model. The problem I seem to be running into is mapping the Gradio API port properly. With MemGPT, I just get a timeout eventually. Since I'm running everything on the same Paperspace Gradient notebook, I tried using the local API route, but I also tried using the public API that I created with --share. I've tried both the textgen-webui API and the OpenAI API. Any ideas? It would be great if you could provide some Paperspace Gradient notebook examples. FYI, your videos are great...
You're a ⭐!
Hey Matthew, great job again! Please do a video where two or more agents utilize MemGPT... set it up so that the agents and the fine-tuned LLMs utilize the memory. But I would say for speed, have a different MemGPT for each? Thanks
You're the best Matthew! Thanks so much! I would love to see how to make a directory full of academic papers be in this infinite brain! Will it be able to tell me the name of the file and/or the name of the paper/author that it gets the information from when I ask questions?
I'll be giving this a shot. I would love to se a few use cases. You could even mix use cases in with your types of content for more diversity. Heck you could do a series on making just one use case even.
While we are at it, I wonder if I could get your thoughts on ways around the potential hardware limitations of using multiple open source agents together, when your hardware can only really support one, maybe two at most. Do you think there is a way to suspend local models so that they hover in the background, not using any resources, while a different model fires up as it is called?
i think you're describing what automatic1111 currently does with the refiner. In that case, i think that would make inference much longer, since it will load model B after model A into RAM/VRAM. i think the most effective solution is to load different low-parameter models (e.g. 2B or 7B) at the same time using different instances or different apps, e.g. textgen + LM Studio + koboldcpp. With this method, the only limitation is RAM/VRAM; this won't affect inference speeds, since only one model/app is performing inference at a time.
Great stuff as always. Any thoughts about the different use cases between "TeachableAgent" and MemGPT? Is TeachableAgent more like session memory while MemGPT gives long-term memory? Perhaps MemGPT could be used as a TeachableAgent?
Amazing! Thank you for this video.
How can I enable access to a local folder so the agent can read and write to this folder?
Hi Matthew, love all your videos about MemGPT and AutoGen. Are you interested in making a video about MemGPT + AutoGen + LM Studio? 👏 I haven't found videos better than yours on this topic
I need them to have memory to learn stuff; this will soon be super smart!
Hi, this is awesome! Can you please prioritize a video on "How to Get the Most of : Combining MemGPT with AutoGen, but Locally with Open Source Models" instead of using ChatGPT for a real-world scenario such as developing a marketing campaign or any other real situation? Thanks a lot!
This might be a stupid question, but I am new to this. Does MemGPT work with documents in Portuguese?