Excellent work. You should do an official fork in Autogen, share the repo, and collaborate with your community.
I'm implementing following your videos, but already created other specific agents, that I would gladly share back with your repo in a PR.
I'm sure there are others here too.
Cheers
Hi, can you share your repo?
You should release the code for each video. Don't worry about shipping stable versions. Make it easier for us to follow along and explore with you. Don't let the perfect be the enemy of the good. Thank you for making these videos!
Can you share the repo?
If you had stayed until the end, your question would have answered itself.
I love how eloquent with the programming you are. All of the keyboard shortcuts and watching you code is like a symphony. I am about to listen through this at work. Can't wait!
this.
I've been coding since the 90s, and this guy is the real deal on all metrics you just gave. he clearly knows his shit and then is a step above his equals.
I haven't started this video yet, but you deserve a big word of thanks, man!!!
I love that you go deeper and share valuable insights!
Thanks for the videos. One suggestion to help the LLM find the correct tables is to annotate the table definitions. It helps tremendously.
I've been working on this problem as a research problem for some time now. I see it like this:
- persistent memory (outside the LLMs)
- communication "maps" that control the direction and inclusion of agents, like the trees you displayed
- a cache replay layer (to reduce costs)
- cost projections based on the replays (temp 0)
Autogen gets us close, but it really just handles some of the communication redirection and gives us a place to put a memory manager of some type.
By the way, your videos are great. Love the idea of using your hands. I was thinking of doing this until I saw you were doing it.
My 2p is that it needs another layer of abstraction between the user and the instance. "Look at the task, this is your team: XYZ. Devise a communication layer that will give the best performance output. Performance is defined as ABC. Give three examples to test." Then have a performance monitor agent whose job is only to evaluate, or perhaps evaluate an evaluation (I know, more layers, but that's what we do in the real world), of the comms paths. Then make one of those agents a Teachable Agent (it's part of the Autogen library). It will then learn what works (persistent memory, yay!), and then feed back in for future workflows.
We *have* to get an abstraction layer that can produce metrics for evaluating output and measure against those metrics; otherwise we will have some very 'meh' outputs unless every scenario is hand-coded, which kinda seems pointless given the tools we have at our disposal.
@JonathanLuker I've done this sort of thing with a genetic algorithm. It worked quite nicely, but I basically ended up with a corporate org chart anyway :D
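The cache replay layer from the list above can be sketched in a few lines: at temperature 0, identical prompts should yield identical completions, so responses can be keyed on a hash of the request. A minimal sketch; the `call_llm` callable is a hypothetical stand-in for whatever client you actually use:

```python
import hashlib
import json

class ReplayCache:
    """Cache LLM responses keyed on a hash of (model, messages).

    At temperature 0 the same request should produce the same completion,
    so replaying from cache is safe and saves API cost.
    """

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, model, messages):
        payload = json.dumps({"model": model, "messages": messages}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def complete(self, model, messages, call_llm):
        key = self._key(model, messages)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        response = call_llm(model, messages)  # real API call happens only on a miss
        self._store[key] = response
        return response

# Usage with a fake backend standing in for a real client:
cache = ReplayCache()
fake_llm = lambda model, messages: f"echo: {messages[-1]['content']}"
msgs = [{"role": "user", "content": "hello"}]
print(cache.complete("gpt-4", msgs, fake_llm))  # miss: calls the backend
print(cache.complete("gpt-4", msgs, fake_llm))  # hit: replayed from cache
```

Replays against the store also give you the cost-projection piece for free: counting misses at temp 0 tells you how many real calls a workflow would make.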
Something that works, but is extra work, is to have the LLM describe each table and its purpose and embed that description. Then you query those embeddings against the user's query. Can't wait to see a repo. Amazing work.
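The table-description idea can be sketched end to end. The `embed` function below is a toy bag-of-words stand-in for a real embedding model (you would swap in an embeddings API call), and the table descriptions are invented; only the ranking logic is the point:

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for a real embedding model: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# LLM-written description of each table, embedded once up front.
table_descriptions = {
    "users": "accounts of people registered in the app with emails and signup dates",
    "jobs": "background jobs queued for processing with status and timestamps",
    "invoices": "billing records with amounts charged to user accounts",
}
index = {name: embed(desc) for name, desc in table_descriptions.items()}

def top_tables(question, k=2):
    """Rank tables by similarity between the question and each description."""
    q = embed(question)
    ranked = sorted(index, key=lambda name: cosine(q, index[name]), reverse=True)
    return ranked[:k]

print(top_tables("how many people registered last week"))  # 'users' ranks first
```

The win over embedding raw DDL is that the description carries the table's purpose in natural language, which is what user questions are phrased in.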
You are a legend
Just stumbled upon this gem of a channel! The depth and practicality of your Autogen tutorials are unparalleled. It's baffling how underrated this channel is, especially when there are others out there riding the hype with clickbait and a stupid snake game. Yours stands out with its genuine value. Eagerly waiting for the rest of the series!
I would like to explore some use cases of handling API calls, like creating a user through an API or fetching some jobs through an API, rather than directly connecting to databases.
Absolute legend, months ahead of the rest of the world. Decades at the 2022 speed of progress.
Did go through it, and I must say, big thanks to you. Absolutely legendary. I love how you piece things together. Subscribed on video 1; it just caught my attention right from the start. Respect!!
I love your in-depth videos and I follow them with great interest, as I plan to use Autogen as my production tool for my projects. Unfortunately I am not a programmer (first time using PyCharm and working with a coding project with several files…) and I spent 2 nights unsuccessfully trying to reproduce the orchestrator project you did. I would really love to get my hands on the .py file and play with it. With my level of knowledge I only feel confident when I rewrite part of an already existing piece of code to fit my needs.
Anyway, thank you! I will continue to watch all your videos about Autogen and might get some success after a while.
PS: just saw that you are addressing this at the end of the video 😅. So, thank you for sharing it in the future. I am learning to read code via your videos, and it is actually much better than getting the code without being able to understand a single line of it!
Great content again. I see a fundamental flaw with the current implementation: the user needs to know the structure of the DB in order for the app to find the right tables and execute the correct queries, as demonstrated in this video. For a real-life use case, the user would not be aware of the DB structure and the app would still need to deliver results.
YOU are the man I need to speak to. Incredible content. I'm going to deep-dive these videos while at work today to make sure you don't already answer the questions I have, but I'd love the opportunity to pick your brain briefly about a pretty sensitive project, if you would be willing.
What great content you share with us. Thank you so much. Waiting for the next video.
Share the WIP code 🤷🏻‍♂️
Your videos have a great mix of high level explanation and hands on practical implementation 👍
This is the real serious, no-nonsense, deep, working tech stuff. REAL "value add", thanks a lot!! Not sure if you are working with David Shapiro on the "HAAS" project.
Really good work! I think if you could manage to distribute work between local LLMs and the GPT-4 API in a way that GPT-4 does all the heavy "thinking" part and the local LLM does most of the standard work, then it would be perfect.
It’s fine to have ‘messy’ code. As long as it’s fast for you to explore and navigate, it’s the perfect form of code.
Ignore negative comments: plan, build, code, observe, get funding, iterate.
Having tested many, many, many open-source models with Autogen..... yeah, Dan is correct: OpenAI's models murder Vicuna, Orca, WizardCoder, etc.
This is fire, bro, exactly what I need. +1 subscriber for your hard work.
I came across this video a couple of days ago, but I couldn't watch it then, and I was curious about this series you mentioned. Well, I lost it and couldn't find it in my search history, because I didn't know what I was specifically looking for. Anyway, I finally found it again; I had it open in a random tab. :P
Hi man, thanks for your great work. Your content is unique, thanks for sharing. As you know, sharing is caring.
Thank you for sharing your thought process.
THE GOAT. Very cool. Do you do consulting?
If you have to ask the price 😂. But...er, yeah I'd be quite keen to find out too...
The most important video on the internet. 1:36
Consider adding a KG database like Neo4j or TypeDB on top of your SQL DB to get more expressive queries between tables.
I came for the agent koolaid. I wasn't disappointed. 😊
Could we save a ton of tokens by not sending the full SQL results, just a compressed, concise version? Or save them locally, perhaps?
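One way the idea above could look: cap the rows and summarize the rest before anything reaches the model, keeping the full result on disk. Everything here (the row cap, the summary shape) is an illustrative choice, not a prescribed format:

```python
import json

def compress_result(rows, max_rows=5):
    """Return a concise summary of a SQL result set for the LLM;
    the full rows can be written to a local file instead of being sent."""
    if not rows:
        return {"row_count": 0, "sample": []}
    return {
        "row_count": len(rows),
        "columns": list(rows[0].keys()),
        "sample": rows[:max_rows],          # a few representative rows
        "truncated": len(rows) > max_rows,  # signals that more exists locally
    }

# 100 rows in, a handful of representative rows out:
rows = [{"id": i, "status": "done"} for i in range(100)]
compact = compress_result(rows)
print(f"full: {len(json.dumps(rows))} chars, "
      f"compressed: {len(json.dumps(compact))} chars")
```

The row count and column list usually carry most of what the model needs to phrase an answer; the sample rows are just enough to ground formatting.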
The LLMs are still quite useless with large MVC projects like BookStackApp/BookStack. How do you make one generate everything from routes to models, controllers, and Vue frontend components, for example, while following the project style and patterns used in it? The LLM would need to know the project like its own pockets to do that.
Very very useful!
Fantastic! 🎉🎉🎉
Dynamite stuff 💣
Really useful video. Please do share the WIP code.
Well, I do have Autogen working with local LLMs through LM Studio, so technically speaking it's possible to do this kind of thing. I'm testing out some of the Mistral models right now to see how it goes. From what I'm seeing so far, I'd say it's not quite ready yet, but there is a lot happening in this field every day, so it may not take as long as it might appear.
Can you share the code you wrote?
Very good job! I really like the way you set things up. I have a business question, especially because you brought cost up: is there any way to use Amazon Bedrock instead of the OpenAI API, especially with Autogen?
Yeah, I ran up $25 in a couple of hours before implementing caching and replaying.
Running a local LLM is somewhat critical for cost management with these types of things. I'm using LM Studio; you just give Autogen the URL/IP and port where it is running. I have my LM Studio running on a dedicated PC with a decent GPU. All of that said, what the open-source models can do vs. GPT-4 is wildly different for various use cases.
@paulramos5732 I've yet to get the included examples to work with any model I have tried so far. Have you?
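For anyone wiring up the LM Studio setup described above: it usually amounts to pointing Autogen's `config_list` at the local OpenAI-compatible endpoint. The host, port, and model name below are assumptions; use whatever LM Studio's server tab actually shows:

```python
# Autogen reads an OpenAI-style config; LM Studio serves an
# OpenAI-compatible API, so essentially only the base_url changes.
config_list = [
    {
        "model": "local-model",                     # name shown in LM Studio (assumption)
        "base_url": "http://192.168.1.50:1234/v1",  # dedicated PC's IP + LM Studio's port (assumption)
        "api_key": "not-needed",                    # LM Studio ignores the key, but the field must be present
    }
]

# This would then be passed to an agent, e.g.:
# assistant = autogen.AssistantAgent("coder", llm_config={"config_list": config_list})
print(config_list[0]["base_url"])
```

Because the key is ignored locally, the only real failure modes are a wrong IP/port or the LM Studio server simply not being started.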
Is this actually more performant than a react agent with tools?
Any way to optimize the embeddings so you don't have to know the exact table names?
This guy rocks 👏
Great Job
Can any of these work without an OpenAI key?
I'm wondering how to integrate Autogen with something like Pinecone to give the agent team access to documentation / long-term memory. Your vector embeddings are the closest thing I've seen. Anybody have any ideas about this? (Great content btw, miles ahead of other channels.)
Well, you can keep wondering because in the end all this shit is just copy-pasting text prompts across multiple ChatGPT sessions.
@@clray123 very helpful
We need a discord channel! ❤
This same NL-to-SQL query shit could be done with half the effort using NL parsing like spaCy, no LLMs required. Oh, and then you would actually be able to debug it and keep it stable instead of wondering wtf OpenAI is gonna break behind the scenes with the next release. But I forget, it would not be "agentic" then, just working.
I think you might be missing the point. This is meant to be an architecture, using PostgresDB as an example. This process is how we get useful LLM assistants for whatever work we're doing. The point is not Postgres; it's adaptability and quality of outcome, and the Postgres example is there to give something to frame it around during the build.
@JonathanLuker Well, I cannot see any adaptability or quality of outcome so far. I see a lot of effort and workarounds for the inadequacies of this inefficient new technology, with very little value coming out of it. But of course, as a hobby (defined as something you put more into than you pull out), it's quite entertaining.
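For what it's worth, the rule-based route argued for above looks something like this. This is a plain-Python sketch rather than a real spaCy pipeline (a spaCy version would match on parsed tokens instead of regexes), and the question patterns and SQL templates are made up for illustration:

```python
import re

# Hand-written patterns mapping question shapes to SQL templates.
RULES = [
    (re.compile(r"how many (\w+)"), "SELECT COUNT(*) FROM {0};"),
    (re.compile(r"list all (\w+)"), "SELECT * FROM {0};"),
    (re.compile(r"latest (\w+)"),   "SELECT * FROM {0} ORDER BY created_at DESC LIMIT 10;"),
]

def nl_to_sql(question):
    """Deterministic NL-to-SQL: the first matching rule wins."""
    q = question.lower()
    for pattern, template in RULES:
        m = pattern.search(q)
        if m:
            return template.format(*m.groups())
    return None  # no rule matched: unlike an LLM, this fails loudly and debuggably

print(nl_to_sql("How many users?"))      # SELECT COUNT(*) FROM users;
print(nl_to_sql("List all invoices"))    # SELECT * FROM invoices;
```

The trade-off is exactly the one in the thread: every output here is reproducible and debuggable, but every question shape has to be anticipated by hand.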
People will do anything to avoid writing SQL 😂