*UPDATE:* Thanks to viewer @tryingET's great suggestion, I managed to improve the prompts and make the output consistent. You can check out the improved version on my GitHub.
Thanks for watching, and I'm curious: what were your experiences with crewAI like?
Thanks to you I will start this in my home lab :) btw good job, waiting for more content. I see a ring on your finger, does that mean it's too late?
Top ASMR experience👌
Thanks for this deep dive! The problem with open-source models is that they don't handle function calls, which is necessary for the crew to function. It seems that OpenHermes handles it well and the scripts work as expected, but even GPT-3.5 gives better results... thanks again for sharing
I have to agree @@TheGalacticIndian
@@tryingET This was just a great suggestion! I just ran the script a couple of times, and you're right, the results are much more consistent. I only changed the part with "(linkToURL)"; for some reason it was throwing the agent off. But it works with a simple "(link to project)". I'll update the repo, thanks a lot for this help 🙏🏻
crewAI creator here, so cool to see videos like this! I also automated parts of my work with crewAI and it's an "a-ha" moment for sure! Great content! Keep it up 💪
Wow I was trying to find your video my guy… someone posted a tutorial with your video and didn’t link you, great videos @BuildNewThings
Great program! Any chance we will be able to use MemGPT with it?
is it possible to run them in parallel instead of serial (maybe via threads)?
The idea is that there's a manager who, of course, manages the worker agents (researchers, writers, ...). The worker agents then hand off their work to analyst agent(s) who determine if more work needs to be done. The result is handed back to the manager, who passes the to-do work to task creator agents that define what else needs to be done based on metrics from the analyst(s). That is then brought back to the manager, who assigns tasks to agents based on available workload (an agent workload queue would be cool). Also, dynamically "spawning" (instantiating) agents based on needs would be cool, to conserve resources. Maybe some features are still missing in crewAI to do that - what do you think?
Huh wat? The video eventually comes to the conclusion that the results are useless and basically a waste of time. ☹️
@@themax2go Not at the moment, it only does sequential; maybe pair it with AutoGen.
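For anyone wondering how far you can get today: independent crews can at least be launched side by side from plain Python, even though crewAI runs each crew's own tasks sequentially. A minimal sketch, assuming a hypothetical `build_crew(topic)` helper that returns a configured `crewai.Crew`:

```python
# Minimal sketch: launch several independent crews concurrently with threads.
# build_crew(topic) is a hypothetical helper returning a configured crewai.Crew;
# each crew still works through its own tasks sequentially.
from concurrent.futures import ThreadPoolExecutor

def run_crew(topic: str) -> str:
    crew = build_crew(topic)  # hypothetical factory: agents + tasks for one topic
    return crew.kickoff()     # crewAI's entry point for executing a crew

topics = ["AI agent frameworks", "local LLMs", "function calling"]

with ThreadPoolExecutor(max_workers=len(topics)) as pool:
    results = list(pool.map(run_crew, topics))

for topic, result in zip(topics, results):
    print(f"--- {topic} ---\n{result}\n")
```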
This is by far the most clear explanation I've found on agents, how to use them and how to run them locally. Congrats!
Literally every LLM video
Title: "I automated everything"
Video: "Wow no model can understand the task"
Still a long way to go!
When humans take their organic brain inside their skulls for granted...
Even if we could reach Singularity, we will always be n-1 away from the 'n'th civilization that Created our n-1 universe. This is the fundamental limitation of pixel based evolution. Hence, we would go a longer way being organic hackers than materialists.
@hidroman1993 A little late to the party. Has it gone any better by now?
@@syedirfanajmalNo, lol
I hardly comment on YouTube channels. But this is another level. The way you explain things, the organization, referencing, and pace are spot on. Great content. Please keep up the great work. Thanks :)
I'm relieved to find someone else who faced challenges while running local models. Your sincere and practical review is appreciated. Unlike many others who simply join the hype train without discussing their struggles, your honesty is refreshing. Thank you.
Try Ollama or GPT4All. Anyway, to run a SOTA model you will need a GPU or Apple silicon.
Me too. Local models, besides being a pain in the butt and freezing your whole damn dev environment, then give you a shitty output. I am surprised she tested so many models, I would have given up much faster.
Not enough people talk about this. I follow the steps and run into error after error. Versions don't match, missing dependencies, the list goes on. AI development really takes a toll on your life force lmao
I only tested one model weeks ago and I got no issue installing it and running it. It's uncensored and I was curious. Outputs were great. If you have at least 16GB of RAM, there's no way a 7B model will crash your PC. Of course it's slow as fuck, like 1 word per second generation.
@@VizDope You assume you have nothing else running, dude; how do you develop this way? You need at least 64GB just to feel something!
First time I encountered your channel/videos, and you immediately got a subscribe from me. I love the clear explanation, straight to the point, no overpromising or "Make $3000 a day with these 10 simple bla bla". Thank you for keeping things real and useful. Breath of fresh air!
This mirrors my experience with local models vs automation. I've come to the conclusion I either need to massively upgrade my hardware or just wait it out for a new breakthrough model. I'm a bit jaded with all the hype that never seems to live up to real-world use.
totally agree
Welllllll, yes and no. I've been able to triple my productivity, and I got 100% on two separate essays using AI to workshop some ideas and build the essay outlines.
We already have "Agency Swarm" in our business so there is no need for CrewAI or Autogen for sure. Try it out.
It's shocking to me how quickly someone made an AI to do this, as I created my own autonomous agents in Python a few months back to do similar things.
One tip I have for people trying to come up with large & detailed tasks/descriptions is to write them in a .TXT file, and then reference it in your code. That way the code stays clean, and it's also easy to modify the descriptions and tasks in the future without changing anything in the code.
Excellent idea! You could even create services to update it from a GUI without touching your code. I'd probably set it up like a traditional JSON config file.
How do I use a .txt file from my laptop and feed it as input to the LLM?
Please guide.
@@DAN_1992 I don't know the answer, but could you just type that question into ChatGPT and it'll tell you?
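In case it helps with the question above, here is a minimal sketch of the .TXT-file approach: keep the long descriptions in a `prompts/` folder (a placeholder name) and read them in at runtime, so only the text files change when the tasks evolve.

```python
# Minimal sketch: load long agent/task descriptions from plain .txt files.
# The prompts/ folder and file names are placeholders.
from pathlib import Path

def load_prompt(name: str) -> str:
    """Read a description from prompts/<name>.txt next to this script."""
    return (Path(__file__).parent / "prompts" / f"{name}.txt").read_text(encoding="utf-8")

researcher_goal = load_prompt("researcher_goal")   # prompts/researcher_goal.txt
writer_task = load_prompt("writer_task")           # prompts/writer_task.txt

# The loaded strings can then be passed wherever the framework expects plain
# text, e.g. Agent(role="Researcher", goal=researcher_goal, backstory=...).
```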
Good Job Maya :) I thought about many ideas while watching your video!
Am I the only one who appreciates a good educational video that has no overly hyped reactions?
Love this. Really transparent and talking about limitations of models..
Maya, have to say - that was in-depth. Love the detail. It's expensive running GPTs with GPT-4, but the output is definitely worth it. I guess getting your own custom newsletter every day for less than a dollar does save research time. The next step is to get that file data into the actual newsletter now. Cheers for the free resources on LangChain. Just getting into APIs with Python and deployment apps. I predict that you will have a great future on YouTube. Keep up the good work.
Very good and clear explanation. I rarely comment, but when I do, it's because it's worth it. So GPT-4 was the best model for all the tasks.
This is awesome! Building a team of AI agents that can access real-world data sounds incredibly powerful.
Damn, you really are DOING THE WORK and then reporting back to us for free, dude !
Thanks so much for such gem. Much appreciated !
Huge thanks for such detailed, well structured and illustrated information! The best video I’ve watched on AI so far.
Amazing video thanks for the insights.
One of the best, honest reviews - love this! Thank you!
Everybody can do it... then opens a terminal and starts typing 😂... loving this video already... can't wait till we start pip installing stuff
some encouragement is not bad ;) but also, the creator of crewAI is already working on a UI, so hopefully people won't be pip installing for too long
😂 yea that install she skipped at the beginning would have been helpful for people like me, luckily I have GPT4 😅
Great video, Maya! Please keep creating more valuable content about agent creation.
Super work there, Maya. You earned a subscriber and a follow. I built an agent on Node with run tools and custom functions over RAG. But this is next level; will try this next. Thanks again. Keep shining.
I used this same method, only through Google Apps Script and integrated into spreadsheets. Nice work.
That sounds interesting. I know Apps script better than Python and I want to work with sheets. Would you share your work, or some of your insights?
@@lausianne sure. I have a few videos on my channel and if you shoot me an email, I can share the code I used
Really nice work, friend. Nice narrative style and prosody while pursuing such a structured goal. And it's definitely awesome to listen to discussions on the topics and the decision making; this marks the difference. ... For trivial coding there is AI and the Internet... for the core reasons and concepts there is us, the humans.
Thanks for the refreshingly honest results rather than the usual fake hype. It looks to me that LLMs have a long way to improve before autonomous agents can become actually useful.
Excellent explainer... Congrats and keep going !!!! Hello from Dominican Republic.
Thank you for your clear and lucid explanation about CrewAI !!!
This was an awesome video Maya. Thank you so very much for the wonderful and very helpful information! 🙏
I had some unsalted Pringles, last week. This week, I had a Four Loko Gold
That's the most mental use of a boom arm I've seen anywhere.
😆
Commented for creativity, and the engagement boost.
Appreciate the way you've explained the difference with the analogy from "Thinking, Fast and Slow".
This video is fantastic! The content is thoroughly explained and incredibly helpful for professionals in the IT space. At Xerxes, we truly value such clarity and effort. Keep up the great work! Looking forward to more insightful content like this!
This is incredible. Thank you so much for sharing. Very inspiring!
Long time no see... A lot has happened since. I was expecting a video.
You look like a 500k+ subscriber creator, great job and great video btw
thanks a lot :)
I’m not a coder but want to learn how to build and tinker with these. Thanks for the clear explanations!
Side note: when showing something like the "AI agent landscape", it would be neat if there was a reference to where to find it. (There was enough info to do so, but a link next to the repo would be sweet.)
good callout, thanks! just included the link in the description box!
This is a great video!! THANK YOU!
What about fine-tuning a local model to perform better by training individual agents?
Copywriter: trained on how to write effective copy
Proofreader: trained to review the copy for edits and suggestions
Project Manager: review work against requirements
Marketer: brings in marketing experience for evaluating concepts
Researcher: skilled in researching against the requirements and working with the marketer.
Risk Manager: identifies risks and how to mitigate them
Venture Capitalist: reviews the project and provides feedback on how to get funding.
Can you have a router with crewAI? Starts with PM to scope the work, assign tasks, and validate deliverables.
I had the same idea about fine-tuning LLMs for specific tasks. Since the agents are created with prompts (afaik), I think it makes sense to fine-tune LLMs for those prompts. Maybe there is already a dataset for that. If someone knows about it, please share where it is.
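For reference, the role split described above maps quite directly onto crewAI's prompt-driven agents; a fine-tuned per-role model would just be passed in through each agent's LLM setting. A rough sketch with placeholder role and task texts (exact parameter names can vary between crewAI versions):

```python
# Rough sketch: role-specialised agents run sequentially, with the PM scoping first.
# Role/goal/task texts are placeholders; a fine-tuned per-role model could be
# supplied via each agent's llm parameter.
from crewai import Agent, Task, Crew, Process

pm = Agent(
    role="Project Manager",
    goal="Scope the work, assign tasks, and validate deliverables against requirements",
    backstory="An experienced PM who keeps the team focused on the brief.",
)
copywriter = Agent(
    role="Copywriter",
    goal="Write effective copy for the requested deliverable",
    backstory="A specialist in persuasive, concise marketing copy.",
)
proofreader = Agent(
    role="Proofreader",
    goal="Review the copy and suggest concrete edits",
    backstory="A detail-oriented editor.",
)

scope = Task(description="Define the scope and acceptance criteria.",
             expected_output="A short project brief", agent=pm)
draft = Task(description="Write the copy based on the brief.",
             expected_output="A first draft", agent=copywriter)
review = Task(description="Proofread the draft and list required edits.",
              expected_output="An edited final version", agent=proofreader)

crew = Crew(agents=[pm, copywriter, proofreader],
            tasks=[scope, draft, review],
            process=Process.sequential)
print(crew.kickoff())
```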
I have learned a few very important things from your video. Thank you for an amazing video 🙏
I want to thank you for this video! It is one of the most informative videos I have seen. Now that I watched it through I realized you really did your homework. I appreciate it.👏
Really good foundational information and great content. Thank you!!
INCREDIBLE CONTENT. You just got a new follower
Thanks for testing local AI models, I had a similar experience
Explaining something with an example is the tough part; the best part is you got an excellent example with a wonderful pace to explain things to your viewers. Keep rocking.
This is so in-depth; I really appreciate your hard work and dedication, Maya.
It's quite possible that GPT-4 is much more adept at understanding the premise of function calling, as it likely has a fine-tuned expert in its MoE to deal with "GPTs", thus making it more capable when dealing with OOTB solutions like CrewAI et al. I'd hazard that only once someone fine-tunes an OS model with a variety of function calling methods, and tools like CrewAI move on to more dynamic conversation flows rather than just sequential, will we begin to see the benefits of offline multi-agent setups.
Ha, never heard anyone mentioning Thinking, Fast and Slow anywhere; that is a great book.
You just won a New Subscriber, great video 🎉
Wow. Thank you for such a great video and for sharing these insights. Really good.
Absolutely stunning, Maya, thanks for sharing this golden information. 🤩
I just discovered your channel, beautiful concept, all the best insh'Allah
This is awesome Maya, thank you for sharing.
Great presentation of the use cases and functionality. Thanks, Maya! 🙂
Many thanks, you just saved me a lot of time trying to figure out whether a compound model with agents would solve a particular problem. I have basic Python knowledge and that was going to eat a lot of my time. So thank youuuuuuu! ❤
Their efficiency in handling tasks like data processing and research is astounding. Have you ever attempted to coordinate several AI agents with SmythOS?
Impressive video. The Reddit scraper, thinking about changing Ollama settings :D I'm sad your system limited your choice of LLM. But now I'm really motivated to try it on my system, to test Mixtral.
This is one of the best videos on AI assistants! 🎉 Thank you Maya!
This is an incredible guide!!! Thank you so much for making this video
This was probably the most helpful video I have ever watched.
Watching from BerylCNC... what's your opinion on using CrewAI or AutoGen versus creating a GPT within OpenAI, and providing instructions that frame out the functionality in a similar way? My development work is mostly related to CNC tool path utilities. LLMs do a poor job of inferring and understanding geometry, so I have to bake in a lot of rules and math. GPTs seem like a cool way to get noticed, but I really need to include libraries and Python code. One of ours is called "Beryl of Widgets", and it helps makers figure out what to make and sell with the tools they have available. It could be so much more with CrewAI, I think, but then I need a way to deploy it. Great content, thank you!
Maya, wonderful! I love learning from you. About local models, I bet Nous-2-Pro-7B could do a good job but have yet to try it. Keep up the good work!
I watched Matthew Berman’s video on AutoGen, what made you decide to use CrewAI and have you tried/compared it to AutoGen?
Excellent review. New subscriber earned 😊. Would be interesting to see your take on AutoGen Studio and compare it with CrewAI.
crewAI UI is coming 😎👉👉
This is like a book in one video, thank you so much! Just curious, but how would you compare the latest AutoGen Studio to CrewAI? Lots of wonderful ideas here and beautifully presented; thank you so much for publishing this. You are indeed a knowledge-sharing master and the world needs more intellectual contributions like this. Thanks again!
thanks a lot! I'm working on an AutoGen Studio video and I'll compare it to crewAI
Hello, just found your channel. I was expecting some mediocre video under a somewhat clickbait title, but I quickly realized this was actually interesting content, and I am quite impressed by this thing. Will definitely give it a try, although I can already see the terrible social and economic impacts of using it in the real world in enterprises.
PS: at the end, it sounds to me like what you did to get your result was overfitting.
thanks a lot :) yeah you're right, I didn't even think about it, but overfitting might be the problem!
Yes, well said, definitely got my attention; even clicked the bell for the fourth time ever. Funny intro, then actual genius-like content behind it. This channel must be AI already, that's the only possibility.... ;]
It's very calming to listen to you. Doesn't happen with technical videos a lot. Great vid!
Thank you for your great research and video. Concerning Daniel Kahneman's System 1 and System 2, the French neuroscientist Olivier Houdé proposes a continuation of this theory, based on the latest neurology discoveries. He published a short book called "L'Intelligence humaine n'est pas un algorithme" that is easy to read and understandable, and as it is short, it might be easy to translate to English with an LLM.
Hi Maya! I’m so fascinated by your technical abilities. I barely understand the whole thing, but I’ve always been fascinated by AI and have started learning AI tools. I saw on your TikTok that you taught yourself Python. It would be awesome if you could also share your learning process coming from a non-tech background and your progress so far. Thank you :)
Thanks! It's a great idea, I'll definitely make a video about that :)
On 8GB VRAM I had no problem running 10B or 13B models; however, I run Q5 GGUF. On 24GB VRAM I am able to run a 70B Q4 GGUF. For slow tasks it's acceptable speed.
Thank you for the video. At the moment, I am experimenting with CrewAI and AutoGen (which uses the cheaper GPT-4 Turbo) - these tools are improving every month. In practice, I still achieve better results when I closely collaborate with LLMs - but who knows, in 6-12 months it might be possible to fully automate my workflows.
thanks for the feedback! that's interesting, I also can only automate the parts of my work that require processing big amounts of data. but who knows what's going to be possible in 6-12 months!
@micbab-vg2mu Could you share insights into how you create your workflows for optimal results? I'm curious to know if you have any specific advice for optimizing workflows with LLMs. Any tips you can share would be appreciated!
Hello, could you share your thoughts about crewAI vs Autogen? Which one provides better results? Maybe which one is simpler to use? Or which one gives more opportunities?
Adam - both methods are very simple to use (I don't have an IT background and I manage fine). If you're planning to go open source, I recommend CrewAI; if GPT-4, then AutoGen 2. Even though the workflows aren't perfect, they're worth knowing :) @@aszmajdzinski
Would be interesting to see if combining MemGPT with one of these LLMs might help, as your problem is most likely a teeny tiny context window - it may be that your instructions are getting lost when combined with all the data taken from the tools. I believe the creator of CrewAI is looking into this.
I had this thought as well.
Great video and LLM review.
As I loved the video, I'm going to create my agents 🕵♀; Thank you Maya! 😄
Good job on this video. I've used crewAI and Ollama. Always looking to see how other people are using this stuff.
Amazing video 🤘
Your channel is really good! Eager to see more of the AI content.
Thanks for the info. Didn't even know about CrewAI.
Will be learning more about CrewAI. I have been using Ollama to run LLMs on my Raspberry Pi 5. I really need more Pi 5s to run in parallel.
I have been considering doing this. Thanks Maya.
My understanding is that using a larger quantized model works better. I’m planning on trying it soon on my computer, maybe with AutoGen. I’ve got a 4090, i9-10900K, and 64GB RAM, so I’m hoping I can run maybe a 30B quantized model on it.
I read that the ~5-bit quantized models are the sweet spot that reduces your memory footprint without any significant loss in quality of responses. 4-bit is still good but takes enough of a hit to matter. Again, haven’t tried it myself, so maybe I’m mistaken, but that’s what I read.
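For anyone trying the same setup, this is roughly how a quantized local model served by Ollama can be handed to an agent framework through the LangChain wrapper; the model tag below is an assumption, so check `ollama list` or the Ollama model library for the exact quant name you actually pulled (the import path also moves between LangChain versions).

```python
# Minimal sketch: use a quantized local model served by Ollama as an agent's LLM.
# The model tag is an assumption -- replace it with whatever `ollama list` shows
# (e.g. a q5_K_M build, the "sweet spot" mentioned above).
from langchain_community.llms import Ollama  # older versions: from langchain.llms import Ollama

local_llm = Ollama(
    model="mixtral:8x7b-instruct-v0.1-q5_K_M",  # assumed tag, adjust to your local build
    temperature=0.2,
)

# A quick smoke test before wiring it into agents, e.g. Agent(..., llm=local_llm):
print(local_llm.invoke("In one sentence, what is 5-bit quantization?"))
```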
I have worked with dozens of models and scripts as a non-programmer newbie. This walkthrough worked almost on the first run! Thanks.
Hey kids, I have to say, you are just amazing. I am 42; when I grew up we did not even have computers (I am from Eastern Europe, so the first computer in the family was for business, and it was really old; I got my first phone when I was 16 and it was nothing like today's phones, and so on).
But I do not care how old I am, or whether we had these technologies available back then, because we have them now. And thanks to creators like you I can just try this and be amazed.
I do not think you know how magical this stuff seems to someone who did not grow up in this era, but you are creating pure magic :) thank you for letting me see behind the curtain and just try :)
have a lovely day
Thank you so much for the lovely message :) fellow Eastern European here - at "computer science" classes we had ancient and barely working computers, and the peak of education was making a PowerPoint presentation about printers and cartridges 😅
I’m so glad you like the video and comments like this make me feel so fulfilled and motivated to work harder 💪🏻
I just started to learn about AI and... I barely understand what's going on here, but I'm fascinated :). I was thinking about the possibility of creating this kind of agent/assistant for tasks like searching for information about a specific topic online. I will follow you :)
Wow, what an intense research on the topic! Thank you for sharing this info with us 🙏
Great work! I'll try it out tomorrow! 👏
Great! So many tests and trials, thank you for sharing them all.
🎯 Key Takeaways for quick navigation:
00:41 🧠 *The video discusses the differences between System 1 (fast, subconscious thinking) and System 2 (slow, deliberate thinking) in the context of AI capabilities, highlighting that current language models primarily operate on System 1 thinking.*
01:48 💡 *Introduces two methods to simulate System 2 thinking in AI: "Tree of Thought" prompting and the use of platforms like CrewAI, which enable the construction of custom AI agents capable of collaborative problem-solving.*
03:26 🚀 *Outlines the process of setting up AI agents using CrewAI, emphasizing the importance of defining specific roles, goals, and backstories for each agent to ensure effective task execution.*
07:22 📈 *Describes how AI agents can be made more intelligent by granting them access to real-world data, and how to avoid fees and maintain privacy by running models locally.*
14:31 💸 *Discusses the cost implications of using models like GPT-4 for AI-driven tasks and explores local models as a more cost-effective and private alternative, despite their varying performance and capabilities.*
Made with HARPA AI
That mic stand placement is wild lol
This looks lovely, I can't wait to take charge of work and make it more productive. Can't wait to binge the rest of your videos; newly subbed.
And fire loads of redundant employees ?
Very informative, thank you. Something like a carpet or room dampening could help the sound quality of the stream. Thanks again.
Actually a great video to start with AI agents, thanks
I would be interested to know how the local models would have performed if, for each task, the model that was adequate at it had been used (i.e., using 3 models instead of just one). First video I have watched from you -- very professionally laid out, and I will be sure to SMASH that Subscribe button.
Great video! Congrats!
Great video! Thanks for all your hard work!
I'm so confused, you load the API env variable, but how is it used?
I like your setup and vibe, keep this good work up.
So these agents require something called "function calling" in the LLM, which is enabled in GPT-4. That's why open-source models didn't perform well, but I think models that are fine-tuned for agents and function calling will do better. Worth a try!
To deal with variability in easy-to-identify properties like "Does it have a link?", you can have code that reruns the model if it finds fewer than N links in the output. You could even save a little money by checking before the whole output has been written and aborting the current run.
those are great suggestions, thanks!
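A minimal sketch of the rerun idea, assuming a hypothetical `run_crew()` callable that returns the finished newsletter text as a string:

```python
# Minimal sketch: rerun the crew if the output contains fewer than N links.
# run_crew() is a hypothetical callable that returns the final text.
import re

MIN_LINKS = 3
MAX_ATTEMPTS = 3

def count_links(text: str) -> int:
    """Count markdown links and bare URLs in the output."""
    return len(re.findall(r"\[[^\]]+\]\([^)]+\)|https?://\S+", text))

output = ""
for attempt in range(1, MAX_ATTEMPTS + 1):
    output = run_crew()  # hypothetical: kicks off the crew and returns its answer
    if count_links(output) >= MIN_LINKS:
        break
    print(f"Attempt {attempt}: only {count_links(output)} link(s), rerunning...")

print(output)
```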
Sometimes it's faster and more efficient to do the work (write the business plan or blog) yourself as a human rather than spending the time programming an AI to do it. But I'm old, with an efficient creative mind. Still, good video.
It's just a tool. The simple old human is still the best at getting work done! 😂
Excellent work... keep going and soon you can just go to bed and stay there for the rest of your life. Amazing!