I have seen many tutorials for AutoGen. Almost all of them used the authors' examples. This is the first tutorial I've seen with an original use case. Good job.
I have the video playing, just started. I was hoping it would be a new application of this!
@@93cutty I didn't get it.
@@atchutram9894 what didn't you get my man?
@AIJasonZ built one that works as a Discord mod, it's kinda nuts too.
100% agree! Pretty incredible stuff!
For anyone that is wondering, YES, Microsoft AutoGen can run on a local LLM; you do not need to use OpenAI. So you can get this whole system to run locally without any data being sent to the cloud, or paying a single cent for it. Got it running yesterday. Of course, it's considerably slower on a local system.
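For anyone who wants to try the same thing, here is a minimal sketch of how I pointed AutoGen at a local OpenAI-compatible server (LM Studio, a llama.cpp server, LiteLLM, etc.). The URL, model name, and key below are placeholders, and older pyautogen releases spell the endpoint field "api_base" instead of "base_url", so adjust for your version:

```python
# Minimal sketch: pointing AutoGen at a local OpenAI-compatible endpoint.
# Assumes a local server is already running at the URL below; URL, model
# name, and api_key are placeholders, not values from the video.
import autogen

config_list = [
    {
        "model": "local-model",                   # whatever name your local server expects
        "base_url": "http://localhost:1234/v1",   # "api_base" on older pyautogen versions
        "api_key": "not-needed",                  # local servers usually ignore the key
    }
]

llm_config = {"config_list": config_list, "temperature": 0}

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

user_proxy.initiate_chat(assistant, message="Say hello in one sentence.")
```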
Immediately subscribed. Very good content and the only real-world example I have found using AutoGen yet. Glad you do this, and I can't wait for the follow-up video where you allow more tables and columns in the database. I am sure this is going to blow up in time :)
This is the right place to learn about autogen. I'm fascinated by the blogger.
Dude I’m watching this series like a hawk 🙏🏽💎
Best AutoGen video yet. Looking forward to more!
Great demo, you just opened a bunch of new ideas for me. Much appreciated.
Thank you for posting a real world example
Honestly your videos are extraordinary! please keep em coming, you’re opening a lot of eyes.
No problems here. 1 prompt to rule them all. Works for me.
well done mate! extremely good quality video, thanks a lot!
Gained a sub. Searched for AutoGen but everyone is just super long-winded with no substance; it almost feels like I'm hearing a script from a very repetitive LLM. You are straight to the point, skip the fluff, but still give quick and short explanations for everything.
Also ++++ want to follow along with the project because it sounds super interesting
Really inspired by your architecture.
Fascinating. You, friend, have got yourself a subscriber.
This tutorial is just great. Can't wait for the next part.
I'm reluctant to adopt any of these fly-by-night frameworks. Who needs a framework, really, anyway? This just sounds like...OOP.
Very neat and well constructed. Thanks for sharing!
Powerful agents... Very interesting. Thx 🙏🏻
Thanks so much for your great content, really good job. If I can suggest, please avoid the full-screen central captions, which we also cannot turn off; they are distracting and, honestly, they make me anxious lol. Thanks, peace and love, R.
Yeah, I was going to suggest this also. It's fine for emphasising a specific important sentence now and then, but it's super distracting when it's one sentence after another for significant periods of time. I'm trying to look at what's being shown behind, but the rapidly changing text keeps pulling my monkey brain back to the middle practically involuntarily 😅
Other than that it was a great and very informative video 😊❤
I'm so glad you did your own example. It helps.
Great video, I've been looking at agentic approaches and waiting for agentic frameworks to mature before bringing them more broadly to the attention of others in my company. This is a great state-of-play/how-to video - Thanks!!!!
Thanks for this man, really perfect tutorial. Only feedback is to remove the subtitles in the middle; they're kinda annoying, at least for me. Thanks for the work.
these professional talking hands really know their shit
Another good video, thanks!!
This is amazing, thank you very much! Following along line by line, one issue: it seems aider is not working with the latest OpenAI API version (1.2.4); seems there's no upgrade from their side yet.
Very nice! Keep them coming.
Fixing this with my company too :)
If you show GPT-4 costs, we can decide whether this system "creates value while you sleep" or "enriches OpenAI while you sleep".
great work 👌
Great video and topic, and I agree about single-purpose micro agents. MS also released LIRA, a dataviz agent; I wonder if it can be used as an AutoGen agent. Also, OpenAI now has Assistants with custom functions: is this the same approach? 🤔
Do you have a community or Discord? I have so many questions. In particular I can't figure out how to create my own function mappings, I think I'm just missing a simple step.
Really good video! 👏🏾👏🏾👏🏾🏆🏆🏆
Could we have more tutorials like this for real-world business applications?
Is this possible with Claude Haiku, for less expense?
Can we use a different large language model which doesn't have any rate limit and is free, maybe something like Mistral or Llama running locally, and make use of the AutoGen technology on top of that?
Very interesting video, but how do you manage not to get the "openai.error.RateLimitError: Rate limit reached for 10KTPM-200RPM in organization" error? I keep getting it when trying to use AutoGen.
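Not the author, but what helped me was wrapping the calls that hit the limit in a simple exponential backoff. A rough sketch; the retry counts and the error matching are placeholders you'd tune for your openai version:

```python
# Rough sketch: retry with exponential backoff when the API rate-limits you.
# The wrapped call and the retry limits are placeholders, not from the video.
import random
import time


def call_with_backoff(fn, *args, max_retries=6, base_delay=2.0, **kwargs):
    """Call fn, sleeping exponentially longer after each rate-limit error."""
    for attempt in range(max_retries):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:  # narrow this to your openai version's RateLimitError class
            if "rate limit" not in str(exc).lower():
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
            print(f"Rate limited, retrying in {delay:.1f}s (attempt {attempt + 1}/{max_retries})")
            time.sleep(delay)
    raise RuntimeError("Still rate limited after all retries")
```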
Interesting framework. It definitely needs more work, especially on validating output; right now it's printing and looking for 'Approved' without doing any actual validation. I liked the pros and cons discussion at the end. Finally, I would subscribe, but the mid-screen closed captions and the moving hands in the background were too distracting for me when looking at code, and I mean this with all the positive feedback :)
Amazing work thank you 🙏
Can you do a beginner's guide for AutoGen?
Impressive demonstration. I would love to set up multiple agents to handle some complex database tasks. What scares me a bit is the fact that we are looking at 5 billion records. Is there a way to make sure that GPT-4 won't process any data itself? As long as it's guaranteed that all operations are handled by the database it would be fine, but as soon as data gets loaded into the chat it would be an extremely expensive tea party...
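One way to get that guarantee is to make the tool function itself the boundary: it only accepts read queries and only ever returns a small, capped slice of the result to the model, so the database does the heavy lifting. A minimal sketch, assuming a Postgres connection via psycopg2; the run_sql name, DSN, and MAX_ROWS value are illustrative, not from the video:

```python
# Minimal sketch: the function the agent calls never hands raw tables to the model.
# Assumes psycopg2 and Postgres; the DSN and MAX_ROWS are placeholders.
import json

import psycopg2

MAX_ROWS = 50  # hard cap on what is allowed back into the chat context

conn = psycopg2.connect("dbname=analytics user=agent_readonly password=...")  # placeholder DSN


def run_sql(sql: str) -> str:
    """Execute a read-only query and return at most MAX_ROWS rows as JSON."""
    statement = sql.strip().rstrip(";")
    if not statement.lower().startswith("select"):
        return "ERROR: only SELECT statements are allowed."
    with conn.cursor() as cur:
        cur.execute(statement)
        columns = [desc[0] for desc in cur.description]
        rows = cur.fetchmany(MAX_ROWS)  # never pull the full result set into the chat
    return json.dumps([dict(zip(columns, row)) for row in rows], default=str)
```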
Does it work with a local LLM?
Can AutoGen use a local or private GPT? If not, no company would favour this solution and send private data to OpenAI's servers...
Any ideas how to scale this to real-time use cases with a UPL requirement of less than 100 ms?
Do you have a GitHub link for this video, pls?
How is 'function_map' implemented? I didn't see it in the vid.
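Not shown on screen as far as I could tell, but in the pyautogen versions from around this time the usual wiring was: describe the function schema in the assistant's llm_config so the model can call it, then bind the actual Python callable on the user proxy with function_map. A minimal sketch with a placeholder run_sql helper:

```python
# Minimal sketch of the classic pyautogen function-calling wiring.
# run_sql is a placeholder helper, not the implementation from the video.
import autogen


def run_sql(sql: str) -> str:
    """Placeholder: execute SQL and return results as a string."""
    return f"(pretend results for: {sql})"


llm_config = {
    "config_list": autogen.config_list_from_json("OAI_CONFIG_LIST"),
    "functions": [
        {
            "name": "run_sql",
            "description": "Run a SQL query against the analytics database and return the results.",
            "parameters": {
                "type": "object",
                "properties": {
                    "sql": {"type": "string", "description": "The SQL query to run."}
                },
                "required": ["sql"],
            },
        }
    ],
}

engineer = autogen.AssistantAgent(name="engineer", llm_config=llm_config)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    # function_map binds the declared function name to the real Python callable
    function_map={"run_sql": run_sql},
)

user_proxy.initiate_chat(engineer, message="How many orders were placed last month?")
```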
Great video thank you
Love this and I have some ideas. Have you had any luck using AutoGen with any open-source models? I would love to try trained open-source models on my PC and then use GPT for the final output. Plus, it would save on costs as well.
Yep, already done it, but you're gonna need 128 GB of RAM…
@@perc-ai so a 4090 with 64 gigs of ram still isn't enough? Damn
It might not be enough for Falcon 180B's parameters. It's not all about the GPU for local LLMs; they need serious RAM. @@93cutty
How do you deal with running into API limits?
Where do I find the code used in this video?
What do we do when we have multiple tables?
Does this work with chatgpt4?
Is there a GitHub repo I can pull?
15:45 Interesting how it decides to send that directly to the Senior Data Analyst and not to anyone else in the group chat. Are messages sent to all members of the chat at once? I don't see the output of all the other members of the chat. Or is that APPROVED a unified answer from all the other members of the chat? How many LLM calls are involved? It would be nice to see some statistics on how many calls/tokens were used over the session.
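If you keep a handle on the GroupChat object, you can at least get rough numbers afterwards by walking its message history and counting tokens with tiktoken. A sketch; note it undercounts real usage because every LLM call resends the prior context, and the price constant is a placeholder:

```python
# Rough sketch: message and token counts for a finished autogen GroupChat.
# Undercounts real usage (each call resends prior context); price is a placeholder.
import tiktoken


def chat_stats(groupchat, price_per_1k_tokens: float = 0.03) -> None:
    """Print per-message token counts and a very rough cost estimate."""
    enc = tiktoken.encoding_for_model("gpt-4")
    total_tokens = 0
    for msg in groupchat.messages:  # GroupChat keeps the full message history
        content = msg.get("content") or ""
        n_tokens = len(enc.encode(content))
        total_tokens += n_tokens
        print(f"{msg.get('name', '?')}: {n_tokens} tokens")
    estimate = total_tokens / 1000 * price_per_1k_tokens
    print(f"{len(groupchat.messages)} messages, ~{total_tokens} tokens, ~${estimate:.2f} (rough floor)")
```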
Bro spends 1 day to save himself 10 minutes of writing SQL statements.
Can you pls link the GitHub code 🙏 I want to use it for my project 🙏 🎉
Why do you flash every word you say in the middle of the screen? I can’t watch this. I’m going to have a seizure. Unfortunate because it seems like a great video. Keep up the amazing work. ❤
You need to evaluate your results. This isn't a true "agent", it's just GPT-4 with hands.
What about the token costs?
pretty much a function calling framework
I can already see the AI generating, 0.0001% of the time, this statement 😁:
"DROP ${DATABASE}"
I guess some additional safety filtering is required.
The agent would have read-only credentials.
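Right, and in Postgres that can be a dedicated role that is only granted SELECT, so even a hallucinated DROP just fails at the database. A sketch of setting that up with psycopg2; the role name, database, and DSN are placeholders, not from the video:

```python
# Sketch: give the agent a Postgres role that can only read.
# Run once with an admin connection; role name, database, and DSN are placeholders.
import psycopg2

admin = psycopg2.connect("dbname=analytics user=postgres password=...")  # placeholder DSN
admin.autocommit = True  # apply each statement immediately

statements = [
    "CREATE ROLE agent_readonly LOGIN PASSWORD 'change-me'",
    "GRANT CONNECT ON DATABASE analytics TO agent_readonly",
    "GRANT USAGE ON SCHEMA public TO agent_readonly",
    "GRANT SELECT ON ALL TABLES IN SCHEMA public TO agent_readonly",
    "ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO agent_readonly",
]

with admin.cursor() as cur:
    for stmt in statements:
        cur.execute(stmt)

admin.close()
# The agent's own connection string then uses agent_readonly instead of an admin user.
```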
The voice-to-text in the middle of the screen makes my eyes bleed. Otherwise, awesome video. 😂
I wish there was a way to disable the flashing text in videos. I can't express how much I hate this trend. It's distracting and makes it very difficult to look at anything but the text.
It does not only run on GPT-4.
So perhaps I am confused about how all this AI and AutoGen stuff should work. Why are you writing so much code to do this? I thought the idea was that you basically prompt it in a simple manner and AutoGen handles the creation of the different agents, does all the work, and gives you a quality response back. It seems like you're doing a ton of coding to get this to work. Is the idea that you would be a coder at a company that wants to integrate AI into its product, and this code is an example of what you would write to do that? You're hard-coding the prompts, for example. It seems like those should all be dynamic at runtime.
Stopped watching after a minute because of the horrible subtitles in the center. Word by word, not sentences. Bad idea.
@indydevdan plz
I disagree
Good feedback. Great video :) TH-cam does great auto subtitles if your audio is clear - which yours is crystal!
100%. Totally unnecessary and distracting
Wow, what a douche canoe
😂 You created an AI bureaucracy to do something that a simple prompt was already doing faster.
I kind of hit the wall :/
lil bro this looks longer than me coding it and more error-prone, I really hope this isn't the future of coding. Looks so boring.
STOP
PUTTING
EVERY
WORD
ON
THE
SCREEN!
Drives me CRAZY!!
So obnoxious.
We need to stop using the term Prompt Engineering; it is not engineering, just as Google Search is not Search Engineering, even though I have become quite expert at it. Engineering requires an understanding of the underlying technology, as a bridge engineer needs to know how structures and materials behave, or an electronics engineer must understand the behaviour of electricity. Some idiot with no intelligence can start doing some prompting, believe themselves to be an engineer, and cause some real damage.
sure buddy
GPT-4 has an IQ of 152, you're a peon compared to it man lol. Show some respek
Please, you don't need to put each word you're saying on the screen individually. People can use the cc button on TH-cam and read entire phrases and sentences smoothly. This one-word-at-a-time thing people do these days is extremely jarring and painful to watch. Not to speak of distracting from the rest of the video.
Subtitles in the middle of the screen... NO
Or you could just learn how to code....
Bro 💀you don't need AI for this.... fking hell
How does it compare to langchain?