The How-To Guy
United States
Joined Jan 27, 2012
How to build a ROBUST AI Agent stack [CrewAI + YouTube API + Ollama + Groq + AgentOps]
In this video, we'll discuss how to create #AI agents that interact with the YouTube Data API to extract comments from any given video and generate actionable insights. These agents can help you understand user feedback and create better content.
What you will learn:
================
✅ Installation & Setup: How to get YouTubeYapperTrapper up and running with step-by-step instructions.
✅ Configuring Agents & Tasks: Tailor the system to your specific needs by configuring agents and tasks.
✅ Running the Tool: Execute the tool to collect comments and generate insightful reports.
✅ Understanding Outputs: Learn how to interpret and use the generated reports to shape your content strategy.
Features of YouTubeYapperTrapper:
============================
🚀 Easy installation with Poetry
🚀 Customizable agent and task configurations
🚀 Automated report generation
🚀 Scalable and flexible architecture for any YouTube content creator
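The agents work over responses from the YouTube Data API's commentThreads endpoint. As a rough sketch of the flattening step (the function name and field selection here are illustrative, not taken from the repo):

```python
def extract_comments(response):
    """Flatten a YouTube Data API commentThreads.list response into
    simple records the analysis agents can reason over."""
    comments = []
    for item in response.get("items", []):
        # Each thread nests the top-level comment two snippets deep.
        snippet = item["snippet"]["topLevelComment"]["snippet"]
        comments.append({
            "author": snippet["authorDisplayName"],
            "text": snippet["textDisplay"],
            "likes": snippet.get("likeCount", 0),
        })
    return comments
```

A real run would page through results with the API's `pageToken`; this only shows the per-page parsing.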
TIMESTAMPS:
============
0:00 - Introduction
0:18 - Project architecture diagram
3:03 - Setting up CrewAI agents using NEW scaffolding
3:30 - Directory tree setup walkthrough
5:53 - Creating CrewAI agents
18:36 - Getting started with AgentOps and creating an API key
18:51 - YouTube Data API overview
19:26 - Poetry setup
20:26 - Running CrewAI AI agents using Groq API key
25:24 - AgentOps dashboard overview
26:05 - Running CrewAI AI agents using Ollama
34:10 - Overview of CrewAI+
35:41 - Closing
36:03 - Outro
Support & Community:
==================
🔗 Check out the CrewAI documentation: Here
🔗 Join our Discord community: Join Now
🔗 Visit my GitHub for more tools: Tony's GitHub
🔗 Questions? Chat with CrewAI docs: Chat Now
Don’t forget:
==========
🤗 Like, Comment, and Subscribe if this video helps you!
Share your experiences and suggestions in the comments below.
Connect with me:
==============
🔗 GitHub repo: github.com/tonykipkemboi/youtube_yapper_trapper
Follow me on socials:
𝕏 → tonykipkemboi
LinkedIn → www.linkedin.com/in/tonykipkemboi/
#ai #ollama #groq #crewai #agentops #aiagents #youtube #youtubeapi #contentcreator
Views: 7,060
Videos
How to create the ULTIMATE Ollama UI app with Streamlit
8K views · a month ago
In this tutorial, we'll build a full-fledged Streamlit app user interface to interact with our local model using Ollama! I chose Streamlit because it is easy to get started and very composable. Before starting, download [Ollama](ollama.com/) on your local machine. Enjoy, and please leave your feedback in the comments! TIMESTAMPS: 0:00 - Introduction 0:47 - Preface 1:44 - Code directory walkthro...
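The full code is in the video; one detail any chat UI like this needs is keeping the message history it replays to the model bounded. A hypothetical helper (not taken from the actual app) might look like:

```python
def trim_history(messages, max_turns=10):
    """Keep an optional leading system prompt plus the most recent
    max_turns messages, so the prompt sent to the model stays bounded.
    max_turns must be at least 1."""
    system = [m for m in messages[:1] if m.get("role") == "system"]
    rest = messages[len(system):]
    return system + rest[-max_turns:]
```

In a Streamlit app the full history would live in `st.session_state` for rendering, with only the trimmed slice sent to the model on each turn.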
How to chat with your PDFs using local Large Language Models [Ollama RAG]
57K views · 2 months ago
In this tutorial, we'll explore how to create a local RAG (Retrieval Augmented Generation) pipeline that processes and allows you to chat with your PDF file(s) using Ollama and LangChain! ✅ We'll start by loading a PDF file using the "UnstructuredPDFLoader" ✅ Then, we'll split the loaded PDF data into chunks using the "RecursiveCharacterTextSplitter" ✅ Create embeddings of the chunks using "Oll...
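The video uses LangChain's RecursiveCharacterTextSplitter for the chunking step. As a simplified, stdlib-only stand-in that illustrates what chunking with overlap does:

```python
def split_text(text, chunk_size=500, overlap=100):
    """Greedy fixed-size chunking with overlap: a stripped-down stand-in
    for LangChain's RecursiveCharacterTextSplitter (which additionally
    respects separators like paragraphs and sentences)."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Step back by `overlap` so adjacent chunks share context.
        start += chunk_size - overlap
    return chunks
```

The overlap means a sentence cut at a chunk boundary still appears whole in the neighboring chunk, which helps retrieval quality.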
How to stream CrewAI Agent steps and thoughts in a Streamlit app [Code Included]
5K views · 2 months ago
In this video, I walk through creating a callback handler to stream the CrewAI agent's thoughts/steps on a Streamlit app under the `st.status` container! I used an example app where we have AI Travel Agents to whom we give our current location, destination, and time range for vacation, and they generate an itinerary for 7 days! You no longer have to use the REPL to monitor the agent process! 😃 ...
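The exact callback signature varies by CrewAI version, but stripped of the Streamlit rendering, the pattern is just a callable that receives each step as it arrives (the class and names here are illustrative):

```python
class StepCollector:
    """Minimal stand-in for a UI callback handler: collect each agent
    step as it arrives. A Streamlit version would render each step into
    an st.status container instead of appending to a list."""
    def __init__(self):
        self.steps = []

    def __call__(self, step_output):
        # The framework hands the step object to the callback;
        # stringify it for display.
        self.steps.append(str(step_output))
```

An instance would be passed wherever the framework accepts a step callback, so the UI updates live instead of you watching the REPL.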
How to build the FASTEST AI chatbot with Groq and Streamlit
2.3K views · 2 months ago
Learn how to build a Streamlit AI chatbot using Groq, the fastest LLM inference API. We will go over the code for building the app to include a menu option to select the model type and also a slider to choose the tokens. LINKS Streamlit app used in the demo → groqdemo.streamlit.app/ 👨💻 Code in GitHub → github.com/tonykipkemboi/groq_streamlit_demo ♻️ Venv videos ↓ - th-cam.com/video/xMDh4TYoIB...
Automate upgrading pip in a Python virtual environment [venv]
902 views · 4 months ago
This tutorial shows you how to create a virtual environment and upgrade Pip using a simple shell script, saving you time and effort. In this video, you'll learn: ✅ A quick method to automate venv creation ✅ How to upgrade Pip effortlessly Subscribe for more Python tips and tricks! Timestamps ↓ 🎬 0:00 - 0:18 : Intro 👨🏽💻 0:18 - 1:20 : Code 🛣️ 1:20 - 1:53 : Adding script to PATH 🏃🏽♂️ 1:53 - 2:30...
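The video does this with a shell script; the same automation can be sketched in pure Python with the stdlib venv module (upgrade_deps needs Python 3.9+ and network access; the function name is illustrative):

```python
import venv
from pathlib import Path

def make_venv(path, with_pip=True, upgrade_pip=False):
    """Create a virtual environment at `path`. With upgrade_pip=True,
    pip and setuptools are upgraded in place right after creation
    (requires with_pip=True and network access)."""
    builder = venv.EnvBuilder(with_pip=with_pip, upgrade_deps=upgrade_pip)
    builder.create(path)
    return Path(path)
```

Calling `make_venv(".venv", upgrade_pip=True)` replaces the create-then-upgrade two-step from the video with one call.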
How to use ChatGPT API with Python
356 views · a year ago
In this video, we will cover everything you need to know to get started with ChatGPT API, including signing up for ChatGPT and obtaining API keys. We'll also show you how to use ChatGPT API to generate responses to user queries and how to run the code on Google Colab Notebook. At 4:42, we will demonstrate how to build a ChatGPT clone using the API. At 12:24, we'll showcase the ChatGPT clone in ...
How to get $ETH for Goerli testnet development
330 views · a year ago
How to create a Python Virtual Environment (Beginner Friendly)
247 views · a year ago
How I made $132.10 with 83 lines of Python!
447 views · 2 years ago
How to get FREE testnet $MATIC tokens for development
6K views · 2 years ago
How to get FREE devnet $SOL (SOLANA) and $USDC from faucets
9K views · 2 years ago
Fix Jinja2 error in Docker getting started Tutorial
115 views · 2 years ago
How to get FREE $ETH tokens on Chainlink faucet for development
5K views · 2 years ago
A Decentralized Autonomous Organization Project Demo (KenyaDAO)
79 views · 2 years ago
How to make a word cloud with Python [Beginner Friendly]
76 views · 2 years ago
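The wordcloud library handles the rendering in the video above, but the counting step underneath it can be sketched in a few lines of stdlib Python (the stopword list is illustrative):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in"}  # illustrative list

def word_frequencies(text, stopwords=STOPWORDS):
    """Tokenize, drop stopwords, and tally: these counts are what a
    word-cloud renderer uses to size each word."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stopwords)
```

Passing a dict like this to a renderer (e.g. wordcloud's generate_from_frequencies) gives full control over the counting.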
How to scrape websites using Python and beautifulSoup
486 views · 2 years ago
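The video above uses BeautifulSoup; as a dependency-free illustration of the same idea, the stdlib html.parser can pull links out of a page:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags: the same extraction that
    BeautifulSoup's soup.find_all('a') performs with less ceremony."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)
```

For messy real-world HTML, BeautifulSoup's tolerant parsing is still the better choice; this just shows the underlying mechanism.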
How to Check if Two Strings are Anagram with Python Code 🔥
58 views · 2 years ago
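One common approach to the anagram check (not necessarily the exact one in the video) compares letter counts after normalizing case and spaces:

```python
from collections import Counter

def is_anagram(a, b):
    """Two strings are anagrams when their letter multisets match;
    ignore case and spaces before comparing."""
    def normalize(s):
        return Counter(s.replace(" ", "").lower())
    return normalize(a) == normalize(b)
```

Sorting both strings and comparing works too; Counter avoids the O(n log n) sort.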
How to Auto Accept Facebook Friend Requests in few lines of JavaScript
649 views · 2 years ago
Thank you for sharing good content
🙏
Can you show how to integrate human input in Streamlit?
Is it possible, using this, to extract data from a PDF and convert it to a proper JSON format?
Thanks for the tutorial! How can I make the model give answers in a different language?
Thanks! I don't see where you can tell it to handle languages other than English?
I have been thinking of making YouTube videos to help people learn AI. I am an AI engineering intern at a startup that integrates AI agents with multiple tools. I have decent experience with AI, and I want people to learn and build AI apps. Can you give me some tips?
It is good to know that Ollama has OpenAI compatibility.
Please make with javascript
Unfortunately, I only do Python on this channel. It shouldn't be too hard to convert the code to JS, though, even using ChatGPT.
love you bro!
Love you too!
You are very creative using tech to solve problems. Great work!
Thank you! Cheers!
Appreciate your work, wanted to know can i use it for confidential pdf. is there will be any chances of data leak ??
Thank you for the kind words. Yes, if you use Ollama models like we did in the video, your content will stay private and not be sent to any online service. To be sure, I'd recommend turning off your WiFi or any connection once you've loaded all the dependencies and imports. You can then run the cells to load your PDF into a vector db and chat with it. After you're done, you can delete the collection where you saved the vectors of your PDF before turning your connection back on. This is an extra measure to give you peace of mind.
I installed Ollama and verified it in PowerShell on my Windows laptop. When I ran "!ollama pull nomic-embed-text" it showed "/bin/bash: line 1: ollama: command not found". PLEASE HELP ME, ONLY YOUR VIDEO ON THE WHOLE OF YOUTUBE IS SAVING MY LIFE, PLEASE REPLY AS SOON AS POSSIBLE
So it seems to be an issue with Ollama installation on Windows. I haven't tried installing Ollama on Windows but might be a good time to add a tutorial on that, maybe. Have you tried watching other tutorials or docs on how to set up Ollama on Windows?
@@tonykipkemboi okay, that's kind of you. The problem is not with the installation, I guess; it's running successfully in PowerShell and Command Prompt. The message is appearing in the Colab notebook.
@@Justme-dk7vm ah, I see. So you're using it in Colab instead of Jupyter Lab locally? I would suggest starting with Jupyter Lab; you just need to install it using "pip install jupyterlab". I haven't run it on Colab, but I'm sure it's possible.
@@tonykipkemboi Okay, thank you so much. I was just scrolling through your videos and they amazed me; you are amazing, sir ❤️ I would love to connect with you on LinkedIn, could you please provide the link?
@@tonykipkemboi hey, I tried Jupyter Lab today as you said, and I'm not getting that error like before. But when I enter a query, it takes so long to load. How do I resolve this?
ChromaDB works with SQLite 3, and I'm facing a lot of issues using Chroma. Can we use any other db, or just pickle the entire vector db?
You can definitely replace chroma with any other db like Weaviate or Qdrant or Milvus and so on.
Thanx man ! It worked 👌
@@nitinkhanna9754 awesome!
How can we get output without rephrasing? I mean, I want to know exactly what is written in the PDF, as-is. For example, if I ask what is written in article 3.2.2, can the output be in quotes, word for word?
Ah yes, good idea. I think for this, you'll have to add citations. I'm early into playing with this as I am working on the Streamlit UI for RAG. Always good to have cited sources.
nicely done
Thank you 😊
Hello! Nice tutorial. I was stuck on the first part, unfortunately, as I get the error: "Unable to get page count. Is poppler installed and in PATH?". Do you have any idea how to solve this? I have already installed poppler using brew.
Thank you. Have you tried using ChatGPT to troubleshoot?
Thanks a lot! If we have a mix of multiple PDFs, Words or Excel files, how can we change the RAG to support retrieval of them?
Glad you found it helpful. For different file types, you'd choose loading/parsing and chunking strategies that fit those data types. I'm working on the next video, in which I'll go over CSV & Excel RAG.
Are the libraries you used (langchain , chromaDB ...) open source? and can we use any ollama model?
yes and yes
Your initial ingestion doesn't load just the first page; it ingests the entire document. Your data variable consists of a list containing a single Document object that holds the content of the entire PDF.
That is correct. I did not change the code after testing it previously with loading individual pages. You can load by page and add metadata that way.
@@tonykipkemboi but cool tutorial for summarisation using a multi query retriever. I didn't know this was a thing in langchain
@@madhudson1 thank you. Yes, it's a neat function
Can we do this with llama3, which would be better?
Yes you can use llama3.
Good one, Good luck🤞
Thanks ✌️
Thanks for this amazing tutorial on building a local LLM. I applied it to my research paper PDFs, and the results are impressive.
Awesome 🤩 Love to hear that! Did you experiment without using the MultiQueryRetriever in the tutorial to see the difference?
@@tonykipkemboi That's an interesting question. I tried and found that MultiQueryRetriever works well in general, when LLM needs to connect indirect information from document, but fails to provide relevant information for direct information present in the document. But, this observation could differ case to case.
Congrats on your Video! In your example you use just one PDF, I have a demand to work with thousands of documents, and the main issue is the time consumption to upload the videos. Can you give me some advice?
Did you mean to say it takes time to upload the documents to the vector store and query over them? If yes, I agree that latency is an issue, especially since we're adding another layer of retrieval with the MultiQueryRetriever. It would also depend on your system if you're using Ollama.
You are a legend 🫡 Thank you !!!
❤️🫡
very detailed explanation, thanks, can you please make the same project to give responses in multi-language and with voice output?
Thank you. Yes that would be cool. I can see the challenge coming from finding an open source model that is good at multiple languages. The ones I used are not great at all. For voice, it'd probably be easy to use an open source TTS or even be more granular and use 11labs for a better quality in spite of it not being local.
great walkthrough, the audio can be increased a little bit...
Thank you! 😊 I noticed that I didn't adjust my gain after I had posted. Thanks for your feedback.
Hello friend, thank you very much for your content. I have a question: how can I make it listen to my server from within Google Colab so I don't have to use Jupyter, since my resources are a bit limited?
Hi Tony, the OllamaEmbeddings and the Gemma chat model: did you run them locally? Your response would be helpful.
Yes, all the stuff in the video is running locally
Should I make a Chroma DB connection to make this work?
We do use Chroma in the tutorial.
Great meeting you! Would love to see more about graph applied to generative ai. P.s. diggin' the boondocks vibes!
😎 thank you brother! Yes, got a lot of ideas floating around that I need to start working on. Once most of my summer work travel is done I'll be back to the video grind.
thanks for this tony
🙏
What is the Python version you used for running this PoC?
Python 3.9
Can't you create a manager agent that does what AgentOps does, and just use your local compute power to complete this?
from the Kenyan homeland
Absolutely, bro 😎
@@tonykipkemboi I'm vouching for you, bro. Anything new about AI, LLMs, etc. that comes out, post it here ASAP; we're fully behind you.
Badly need a video on stable RAG that you were going to integrate in this ultimate UI ! When is it coming out ??
I'm probably going to work on it next month. I've been traveling a lot for work the last few weeks and haven't had time to shoot.
@@tonykipkemboi Much awaited! Its a special request please try to release it as early as you can.
@@tonykipkemboi yes bro, waiting for that
I am not able to install unstructured, langchain, and "unstructured[all-docs]". It is taking a long time and didn't install. Can you please help?
You can try installing them directly on your system using the terminal or close the notebook and restart.
Thank you so much
Can I ask one more question? Everything else works, but on the last chain.invoke, when I ask a question, it takes a very long time to run and then an error pops up: __init__() takes 1 positional argument but 2 were given. Can you help me with that as well?
@@hibakabeer1685 so the latency is potentially introduced because we used the MultiQueryRetriever function which generates 5 more queries similar to your original question then sends them to the vector db to get context on them too. For the error, can you show me the code portion that is failing and also add the full error message?
Sure
I got this error when running your code on colab: "ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. imageio 2.31.6 requires pillow<10.1.0,>=8.3.2, but you have pillow 10.3.0 which is incompatible." Could you help me to check?
The error message indicates a conflict between the versions of the `imageio` and `Pillow` packages. You can resolve it by uninstalling the conflicting `Pillow`, installing a version compatible with `imageio` (which requires pillow<10.1.0,>=8.3.2), and reinstalling `imageio`. Run these in a Colab cell:

```python
!pip uninstall pillow -y
!pip install pillow==10.0.0
!pip install imageio --upgrade
```

This should resolve the dependency conflict you are encountering. Let me know if it works.
@@tonykipkemboi tks a lot
hey, thanks for this. Question. Does it have limitations on the number of documents one can upload to chat with? Like can I upload thousands of documents to use?
I haven't tested it with many documents but will do.
Will appreciate a lot. Much love from Kenya btw😃
@@AfrivisionMediake 🫡
Well done, mister! I've been playing with the CrewAI trip planner. Streamlit is a great UI; I'm going to use it. Thank you very much!
This is a great tutorial. Thank you
🙏
Useful tip: use proper WiFi, don't use a mobile hotspot while pulling the model from Ollama. I had an error with that; hope it helps someone 😊
You are an awesome teacher. Thank you so much for explaining this in a clean and objective way :)
🙏
Hello, one little question, if you could help me: how can I pass the result of one task to another arbitrary task?
Do you have one agent create a response and you're trying to pass that to the next agent? If yes, then you can add the response file to the tasks.yaml file with explicit instructions for it to pass that output to the next agent.
Yapper trapper 😂
Glad you share the same humour 🤣
I did some first experiments with local AI, using Ollama and AnythingLLM to talk to the model about a pdf file... and so far, the results are just completely unusable. The AI is just hallucinating on me constantly, making up sentences in the pdf that are not there, failing simple tasks like "quote the first line on page 2 without changing it", not to mention more complex tasks like "list all tools mentioned on page 3". Maybe I'm doing something wrong, but I feel very discouraged from using AI at all for this kind of usecase.
Sorry to hear the troubles but this is very common. Have you tried setting the temperature of the model to 0? That way there's no room for it to be creative.
@@tonykipkemboi Interesting, I'll look into that thanks!
@@user-eh2zd2ih8v let me know what comes of it.