How to build a ROBUST AI Agent stack [CrewAI + YouTube API + Ollama + Groq + AgentOps]
- Published on May 21, 2024
- In this video, we'll discuss how to create #AI agents that interact with the YouTube Data API to extract comments from any given video and generate actionable insights. These insights, grounded in user feedback, can help you understand your audience and create better content.
What you will learn:
================
✅ Installation & Setup: How to get YouTubeYapperTrapper up and running with step-by-step instructions.
✅ Configuring Agents & Tasks: Tailor the system to your specific needs by configuring agents and tasks.
✅ Running the Tool: Execute the tool to collect comments and generate insightful reports.
✅ Understanding Outputs: Learn how to interpret and use the generated reports to shape your content strategy.
Features of YouTubeYapperTrapper:
============================
🚀 Easy installation with Poetry
🚀 Customizable agent and task configurations
🚀 Automated report generation
🚀 Scalable and flexible architecture for any YouTube content creator
TIMESTAMPS:
============
0:00 - Introduction
0:18 - Project architecture diagram
3:03 - Setting up CrewAI agents using NEW scaffolding
3:30 - Directory tree setup walkthrough
5:53 - Creating CrewAI agents
18:36 - Getting started with AgentOps and creating an API key
18:51 - YouTube Data API overview
19:26 - Poetry setup
20:26 - Running CrewAI agents using a Groq API key
25:24 - AgentOps dashboard overview
26:05 - Running CrewAI agents using Ollama
34:10 - Overview of CrewAI+
35:41 - Closing
36:03 - Outro
Support & Community:
==================
🔗 Check out the CrewAI documentation: Here
🔗 Join our Discord community: Join Now
🔗 Visit my GitHub for more tools: Tony's GitHub
🔗 Questions? Chat with CrewAI docs: Chat Now
Don’t forget:
==========
🤗 Like, Comment, and Subscribe if this video helps you!
Share your experiences and suggestions in the comments below.
Connect with me:
==============
🔗 GitHub repo: github.com/tonykipkemboi/yout...
Follow me on socials:
𝕏 → / tonykipkemboi
LinkedIn → / tonykipkemboi
#ai #ollama #groq #crewai #agentops #aiagents #youtube #youtubeapi #contentcreator
Thank you for sharing this!
Huge fan of your content.
I appreciate that! Thank you.
Oh boy! Oh boy! Oh boy! What a power-packed knowledge bomb you just dropped. Man, you are killin' it. Great job, hands down. I started my gen AI journey recently and watched a ton of videos, but I hardly got the information you share from any other video.
Could you please also create a video on fine-tuning an LLM?
Thank you a million.
Glad you found it useful! I'll add that to my list of future videos. This is the second fine-tuning request.
@@tonykipkemboi looking forward to it.
Great tutorial! Perfect for beginners. I appreciate it, bro. Thanks!
By the way:
1. Noticed an error with some URLs, specifically for llama3 local, after spending some time 😊. It seems to come from the {video_id} passed in the agent prompt. It's recommended to use {{video_id}} instead, ensuring compatibility across OpenAI, Groq, and local LLM models.
2. As you mentioned, errors are opportunities for learning. I've now incorporated a 401 check in the function `validate_video_id()`.
3. Encountered an issue creating the `comment.md` file due to an emoji error (UnicodeEncodeError: 'charmap' codec can't encode character '\U0001f64c'). The workaround involved writing both `comment.md` and `report.md` with explicit UTF-8 encoding, ensuring comments with emojis are handled properly in the markdown files.
4. Noticed that the `Report.md` didn't include the URL link. To address this, I made the following change in `main.py`:
```python
inURL = input("🚀 Enter YouTube URL: ")
video_id = extract_video_id(inURL)
inputs = {"video_id": video_id, "url": inURL}
```
5. Because of the Groq rate limit, and wanting to keep testing, I used OpenAI instead (of course, limited to GPT-3.5 😜):
```python
# OpenAI fallback (assumes: pip install langchain-openai)
import os
from langchain_openai import ChatOpenAI

self.openai_llm = ChatOpenAI(
    temperature=0,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model_name="gpt-3.5-turbo",
)
```
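The fixes in points 1, 3, and 4 above can be sketched together in one place. Everything below is illustrative, not the repo's actual code: a hypothetical `extract_video_id()` helper, brace escaping for prompt templates, and markdown writing with explicit UTF-8 so emojis don't raise `UnicodeEncodeError`:

```python
import re
from pathlib import Path

def extract_video_id(url):
    """Pull the 11-character video ID from common YouTube URL shapes.
    (Illustrative regex; the repo's real helper may differ.)"""
    match = re.search(r"(?:v=|youtu\.be/|embed/)([A-Za-z0-9_-]{11})", url)
    return match.group(1) if match else None

# Point 1: double braces survive .format() as literal braces, so a framework
# that fills the template later still sees a {video_id} placeholder.
template = "Fetch comments for {{video_id}}".format()

# Points 3/4: write markdown with explicit UTF-8 so emoji comments don't
# raise UnicodeEncodeError on platforms whose default codec is cp1252.
url = "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
inputs = {"video_id": extract_video_id(url), "url": url}
Path("report.md").write_text(f"# Report for {url}\n\nThanks! \U0001f64c\n", encoding="utf-8")
```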
Thanks again, Bro! Fantastic tutorial! Thanks a lot, Doc! - Srikanth Kamath🤟
This is awesome! Good work finding the solutions for these points. If you're down for it, you can create a PR to the repo and I will merge it as well! Thanks again and am glad you found it useful. Excited to see what you build.
For rate limit - use max_rpm = 2 or 4 for each agent
I tried 2 but still hit the limit pretty quick. I saw the quota page and seems that llama3 70b had one of the lowest limits. Switching to the other models was better.
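For anyone curious what this kind of requests-per-minute cap actually does, here is a stdlib-only sketch of the idea (a simplified illustration, not CrewAI's real `max_rpm` implementation): a decorator that sleeps between calls so a function runs at most N times per minute.

```python
import time
from functools import wraps

def max_rpm(limit):
    """Allow at most `limit` calls per 60-second window by sleeping
    between calls. Simplified sketch of RPM throttling."""
    min_interval = 60.0 / limit
    last_call = [0.0]  # mutable cell so the wrapper can update it

    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            wait = min_interval - (time.monotonic() - last_call[0])
            if wait > 0:
                time.sleep(wait)  # back off instead of hitting the rate limit
            last_call[0] = time.monotonic()
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@max_rpm(600)  # at most 600 calls/minute, i.e. one every 0.1 s
def call_llm(prompt):
    return f"response to: {prompt}"
```

With a limit of 2, the wrapper would sleep up to 30 seconds between calls, which matches the suggestion above; switching to a model with a higher quota, as noted, is often the more practical fix.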
What a powerhouse of a tutorial wow. Great work thank you!
Thank you! 🙏
you're the man 🕶 :) keep up this great work !
🫡
love this!
Thank you!
Interesting tutorial!
Tony, have you tested to see whether 'backstory' makes any difference to the output?
I have tested this on a few models, and whether the backstory is positive (i.e. describes a human type role), neutral (leave empty or put none), or negative ( give a human role not relevant to task or a non-human role, e.g. cat), seems to make no significant difference to the output generated ..
Ah interesting! I haven't played around with it yet. I know CrewAI is working on releasing an option to change the main prompts that are currently abstracted away in the library.
bless!
🙏
Can't you create a manager agent that does what AgentOps does, and just use your local compute power to complete this?
top top top
🙏
Great video! I'm a little confused about the text processing after you get the comments from YouTube. Is it not necessary to pass them through tokenization and embedding?
Thank you. So for this, the agent passes all the comments to the next agent, which basically throws the entire text into an LLM with a prompt to extract meaningful insights from the comments based on themes like praises, complaints, etc. After insights are generated, they are passed to the report writer agent, which generates the final report. There's potential to load these comments into a vector store and do more analytics on them, especially if you want to aggregate insights over many YouTube videos on the same topic. That might actually be a better strategy!
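The hand-off described here, raw comments to an insight extractor and then to a report writer, can be sketched as plain functions, with a stub `llm()` standing in for the real Groq/Ollama call. All names below are illustrative, not the repo's code:

```python
def llm(prompt):
    """Stub standing in for a real Groq/Ollama/OpenAI call."""
    return f"[LLM output for a {len(prompt)}-char prompt]"

def extract_insights(comments):
    """Second agent: dump every comment into one prompt, asking for
    insights grouped by theme (praises, complaints, requests)."""
    prompt = (
        "Extract insights grouped by theme (praises, complaints, requests) "
        "from these comments:\n" + "\n".join(comments)
    )
    return llm(prompt)

def render_report(insights):
    """Third agent: turn the extracted insights into a final markdown report."""
    return llm(f"Write a markdown report from these insights:\n{insights}")

comments = ["Great video!", "Audio was too quiet", "Please cover fine-tuning"]
report = render_report(extract_insights(comments))
```

The vector-store variant would replace the single big prompt in `extract_insights` with retrieval over embedded comments, which also sidesteps context-length limits.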
@@tonykipkemboi Thanks for the answer; it's clear now. Now I understand why you were having the token issues. In fact, the vector DB could help pull in more context and work around the prompt token limit. Looking forward to seeing more of your videos. Great job.
Would be great if there was a frontend using Open WebUI together with Groq API.
🚀
Jover
How do we do it with the Claude API?
It might be able to ingest more.
You can swap the LLM with Claude using this LangChain call python.langchain.com/docs/integrations/platforms/anthropic/
@@tonykipkemboi I assume the way you set it up originally, without it, is best though?
Can you deploy these models using Ollama?
Yes, I believe you can deploy with Ollama. You'd just need to download Ollama in your environment and then set up the rest of the code in dev.
Great video, but I got a little dizzy watching your screens consistently moving around, like a pop video.
Thank you for the feedback. I'll make sure to reduce or take out those moving parts next time. 🙏
@@tonykipkemboi Thank you for your acknowledgement. I'm thinking the speed is ok for the majority. I'm 59 and the speed plays a little havoc with my eyes. I will understand if you cannot change your flow. It would have been a lovely rhythm and pace for me, maybe up until 5 years ago :) I learned so much from this tutorial. Well done!
Hello, one little question if you could help me. ¿How can I pass the result of one task to another arbitrary task ?
Do you have one agent create a response and you're trying to pass that to the next agent? If yes, then you can add the response file to the tasks.yaml file with explicit instructions for it to pass that output to the next agent.
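As a hedged sketch of that idea (task names and field layout are assumptions based on the scaffolded config style, not the repo's actual file), the `tasks.yaml` entries might look like this, with the second task instructed explicitly to build on the first task's output:

```yaml
# tasks.yaml (illustrative; adapt names and schema to your scaffold)
fetch_comments_task:
  description: >
    Fetch all comments for video {video_id}.
  expected_output: >
    The raw list of comments, to be handed to the next task.

analyze_comments_task:
  description: >
    Using the comments produced by the previous task, extract
    actionable insights grouped by theme.
  expected_output: >
    A markdown summary of insights based on the previous task's output.
```

In the Python crew definition, the same hand-off can be made explicit by listing the first `Task` in the second `Task`'s `context`, which CrewAI supports for passing one task's output to another.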
Yapper trapper 😂
Glad you share the same humour 🤣
How is cost calculated if you are using Groq for free 😐?! I am still having some drawbacks integrating AgentOps with the crew.
The cost shows as free since the Groq API is still in beta and free to use at the moment. I'm sure once they start charging like OpenAI, you'll see the price. What issues are you running into?
@@tonykipkemboi Initially, when I was running the crew without integrating AgentOps, I noticed a significantly quicker response time. However, upon integrating with AgentOps, the performance has notably decreased, with response times now exceeding 30 minutes.
Additionally, I've observed that while I receive some overview of the progress, there's a lack of detailed information in the session drill down. This raises the question of whether this discrepancy is due to Groq not being fully integrated with AgentOps.
This slowdown in performance and lack of detailed session drill down is concerning, as it impacts our efficiency and ability to effectively monitor and manage the crew runs.
AgentOps team here: a few of us were running into this and pushed a fix in agentops==0.1.8. Should track normally now!
@@RICHARDSON143 the team responded below! Upgrade to the latest version of AgentOps. Let me know if you see any difference.
@@agency_ai this is awesome! 🙌
Does this interest you?
1. CodeCraft Duel: Super Agent Showdown
2. Pixel Pioneers: Super Agent AI Clash
3. Digital Duel: LLM Super Agents Battle
4. Byte Battle Royale: Dueling LLM Agents
5. AI Code Clash: Super Agent Showdown
6. CodeCraft Combat: Super Agent Edition
7. Digital Duel: Super Agent AI Battle
8. Pixel Pioneers: LLM Super Agent Showdown
9. Byte Battle Royale: Super Agent AI Combat
10. AI Code Clash: Dueling Super Agents Edition
Reeks of ChatGPT to me 😢
It was just an idea, and yes, you're very good at analyzing this content. I'm trying to see whether it would be favorable in the community, but I am a real person who created this idea.
Thank you for your input. I'm starting a ground-level project, if you might be interested, and looking for intuitive people with creative ideas. This is mainly a fun project that could potentially become a source of income.