I rewatched this. Now I think that the term from scratch is misleading. Now I think that from scratch should start with „open a new python file“.
Out of all the AI tools and frameworks you’ve used, which one(s) do you find to be the most useful and have the most promise moving forward?
Really the absolute BEST AI presentations and development around! Thanks!
Hi Mervin, what about Postgres memory? Is it long-term memory, something like AutoGen's teachable agent or MemGPT? Thanks again for your amazing content!
Postgres right now is storing chat history, but ChatGPT-like personalized memory is in the works :)
@@phidata Great, amazing! I don't know if it will be possible, but I dream of a long-term memory system in a SQL-like database with auto-creation of tables for topics, which the agent fills in when it finds relevant info worth keeping (for example preferences, the user's backstory, the company, personal data, ideas and thoughts, etc.). This kind of memory would be very helpful for all kinds of assistants, from office work to psychotherapist, coach, etc., and maybe a runtime to reorganise the whole database when needed... And if all of that can be managed per user session, it will be the perfect framework for a new kind of agentic system, if you see what I mean... Unfortunately I don't have enough coding skills to build that, or to help build it. Thanks a lot @phidata for... Phidata ;) Very great job and a very great gift to the world!
@@christopheboucher127 This is truly amazing! I'm coding the personalized memory piece right now, and your message was like the AI gods speaking to me, showing me what to build. Thank you!
Cannot express how much I appreciate this guidance, thank you.
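The agent-managed memory tables described in this thread could be sketched roughly like this. Purely illustrative: SQLite stands in for Postgres, and the `memories` table and the `remember`/`recall` helpers are made-up names for this sketch, not part of phidata:

```python
import sqlite3

# An in-memory database stands in for the Postgres instance; the
# "memories" table, its topics, and its columns are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS memories (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           user_session TEXT NOT NULL,   -- per-user session scoping
           topic TEXT NOT NULL,          -- e.g. 'preferences', 'backstory'
           content TEXT NOT NULL
       )"""
)

def remember(session: str, topic: str, content: str) -> None:
    """Called by the agent whenever it finds info worth keeping."""
    conn.execute(
        "INSERT INTO memories (user_session, topic, content) VALUES (?, ?, ?)",
        (session, topic, content),
    )

def recall(session: str, topic: str) -> list[str]:
    """Fetch everything stored under a topic for one user session."""
    rows = conn.execute(
        "SELECT content FROM memories WHERE user_session = ? AND topic = ?",
        (session, topic),
    ).fetchall()
    return [r[0] for r in rows]

remember("user-1", "preferences", "prefers short answers")
print(recall("user-1", "preferences"))  # ['prefers short answers']
```

Keying every row on a session identifier is what makes the memory per-user, as the comment suggests; a real system would also need the agent to decide what is worth storing and when to reorganise the tables.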
Would it be possible to add approved chat comments to the local knowledge base? Or is that automatic via the Postgres storage? Also, can PraisonAI-style automatic multi-agent creation be added, vs. manually/programmatically defining all of the agents/tools in LLM OS?
Holy shit Mervin, I'm impressed. You've just shown how, with a few Python libraries and some code, you can hook into the OS file system and create agents with specialised knowledge. If I'm not mistaken, this could run on any embedded hardware with a Linux OS and a 5G or WiFi connection, because you're using an OpenAI API call. Which means you could extend AI to edge computing right now, to all the millions of connected edge devices out there.
Kudos man 👏👏👏
I'm going to try this on some hardware I have at work.
Awesome, thanks for sharing. One question: can we use Llama 3 instead of GPT-4?
Here is how you create an AI OS from SCRATCH => Python… 😂😂😂😂😂😂
What's next? Building AI rockets from SCRATCH with Python? 😂😂😂😂😂😂
Please stop the nonsense and start educating people with real stuff. Yes, Python is installed by default in most OSes, but it's not used to program the OS.
If I were to score this spectacular video on a scale from 1 to 10, I would give you 20! Well done, and many thanks.
I'd love to make something similar; however, I'd want the foundational LLM to actually be a local SLM (small language model) that calls larger models if needed. I'd want it to be GPU-agnostic locally, with GPUs optional: modular GPUs can be added on-prem or called from providers. My idea is not in any way superior on the face of it... just an iteration/extrapolation of a similar idea. Thanks.
You are from another planet, Mervin... always a few steps ahead in the future 🤯🤯🤯...
This is a Phidata team project: th-cam.com/video/YMZm7LdGQp8/w-d-xo.html
super cool, I hope the next CPUs will run Llama 3 70B fast
It's funny that in the film Her the fictional OS1 appears to take up the whole screen. It does later show documents and other UI elements, so I wonder if it's fair to call it an OS. I will say Apple and Microsoft need to get ahead of this and start allowing LLMs to control their desktops; my guess is both are working feverishly on this. In a few years you won't need to know how an email app like Outlook even works to send and receive email, or to create and update spreadsheets: your AI OS will handle that for you. Just tell it what you want.
You could have shared a link to the original video (th-cam.com/video/6g2KLvwHZlU/w-d-xo.html) instead of recording a clone yourself.
Hm 😢
tbh I think Mervin explained it better than me :)
If you haven't noticed, he does that a lot for most of his videos.
@@helix8847 I'm actually a big fan of that, because he explains it much better than me :) Hope he continues to do that. Mervin has a way of communicating complex information, and I learn a lot about my own work when he makes a video.
Would love to see a remotely accessible server added to this setup, to act something like Open Interpreter and the 01 Light.
Nice, thank you :-) Can you please make a video on LLM OS on AWS?
I'm having problems using LM Studio or Groq with this.
Hello, exporting my OpenAI API key isn't working in the terminal. Any tips on how to set it correctly?
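For reference, this is the usual way to set the key on macOS/Linux, assuming a bash/zsh shell (the key below is a placeholder, not a real one):

```shell
# No spaces around '=' -- a space there is a common cause of
# "not a valid identifier" errors. Run this in the SAME terminal
# session you launch the app from; export only lasts for that session.
export OPENAI_API_KEY="sk-your-key-here"

# Confirm it is set before launching:
echo "$OPENAI_API_KEY"
```

On Windows PowerShell the equivalent would be `$env:OPENAI_API_KEY="sk-your-key-here"`; to make it permanent on macOS/Linux, the export line goes in your `~/.bashrc` or `~/.zshrc`.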
Can the LLM OS be built with VectorShift?
Is there a technical reason to choose phidata instead of LangChain?
very nice video, thanks Mervin
This is impressive
Amazing video. Thank you!
❤❤❤
Your webcam overlay is too big; we can't see the code.
Can it run Doom tho?
Legit question: why not just use embedding models like Nomic, for example? Chatting with my LLM, I learned these vector "memories" create connections between nodes, and it links back to those vector memories, meaning its knowledge and memories sort of expand.
Is there any way I can use these assistants created with phidata in multi-agent frameworks like CrewAI or AutoGen?
Let's rename an agent "OS" and pretend it is novel… 🤷🏻♂️🙈
Is OpenAI the only API? Or can I use a local LLM that mimics the OpenAI API? Will that work?
🎉🎉👏👏👏
Phidata > Praison AI?
How does PhiData relate to CrewAI and PraisonAI? Would we use them all separately and independently? Or do they work together somehow? If they are independent, which do you recommend and why?
1. Start with crewAI
2. Find it was a waste of time
3. Move on with your life 😅
No need to blow your mind with more complex stuff to see that this provides zero value.
CrewAI is perfect, and a little bit illegal with how simple it is.
@@denisblack9897 What do you mean by, "CrewAI is perfect and a little bit illegal with how simple it is."
@@denisblack9897 I'm confused... are you saying CrewAI is too simple to do real work? Or are you saying it's amazing?
This is never going to work, since the LLM has to run on an OS as well. This is an OS on top of an OS, and it always needs lots of power.
depends on what you mean by ‘this’ 🤓
Must have never heard of virtualisation.
Yes, it's true that LLMs have an OS under them, in the same way an ATM or a parking meter has an app running on top of an operating system like Windows Embedded or embedded Linux, but I think you're missing the point. LLMs are black boxes; they have no contact with things outside their domain. LLM OS is a concept, and Mervin has demonstrated how you can implement that concept, giving it hooks into your OS to access files and specialist agents. In this video Mervin uses an OpenAI API key, which means you are making an API call to OpenAI's servers. So you could run this as-is on a barebones Linux system with a WiFi or 5G connection, on a Raspberry Pi or a higher-end BeagleBone Black.
If you were going to replace the OpenAI LLM component with an open-source LLM like Grok or Llama, then you are correct: you would need a lot more memory and compute power, not to mention a GPU.