If you're serious about AI, and want to learn how to build Agents, join my community: www.skool.com/new-society
I cannot agree more with @milosjovanic803. I/we like your videos! But you might want to rethink your offer's value at $77/month. (Ridiculous!)
I would like to join, but I cannot pay with a credit card. In Holland we mostly use PayPal and iDEAL.
@davidondreji With your program, can I build my own private agents?
And what's the starting price? I asked ChatGPT and it said $5.75 to have 7 agents with an overseer.
So the first 60% of this video built up the expectation that we were going to use offline Llama 3 Python agents, but at the very end you switched to the Llama 3 available through Groq's API. Although you do get the agent working with Llama 3, it's a bit misleading; it would have been better to say up front: "I haven't got Llama 3 working offline*, but here is how I got it working through Groq's API."
*Edit for clarity: Llama 3 working offline with CrewAI, in the context of this tutorial.
*Edit 2: Others have recently tested the offline Llama 3 model through Ollama and report that it now works. At the time this video was recorded, CrewAI wasn't working properly with Llama 3 via Ollama offline; that was the issue, which should now be fixed.
I agree. Basically, Llama 3 is not 100% open source, as far as I know.
@ardagunay4699 I think it has more to do with CrewAI not being correctly configured for the new Llama model. It's possible to use Llama 2 offline for the same project, but if you repeat the same steps with Llama 3 there is a clear breakdown at the CrewAI step of the process.
Thanks, saved 12 minutes of my life.
You can have Llama 3 running locally. I used Ollama and just pulled the Llama 3 model, no problem.
@EccleezyAvicii It literally just came out. He probably hasn't done it yet.
I also noticed that as of 2024-04-27 the local Llama 3 LLM does not work with CrewAI. However, you can replace Llama 3 with Eric Hartford's excellent model "dolphin-llama3" and you get the expected result. Dolphin-llama3 has the additional advantage of being uncensored.
Cheers! Keep up the good work!
Nice tip! thanks. I will try that.
Oh thank you !!
You need to use it with OllamaLLM from LangChain. Then it works!
I have no coding experience and I copied all the code you wrote, and my crew worked without Groq, which is really something! Thanks a lot for the tutorial.
More than anything, I appreciate you showing when and where these processes don't work. The troubleshooting is a critical part of the process, and the overhype of these systems is most deceitful when the user actually tries to integrate them and runs into all sorts of issues that were hidden by showmen. Really excited for Llama 3 fine-tunes and more powerful agentic systems. Thinking recursive self-debugging and fine-tuning for generating the most understandable, debuggable code, with proofs and tests, could build a solid foundation.
Bro, I just want to say thank you for making this content. It's always super informative in an easy-to-understand format. Out of the 50 (exaggeration, but there are a lot) I always find myself looking for your videos first. I've always had an interest in programming and hacking but didn't do much with machine learning. But now I'm a man obsessed, mainly because of how critical it is for normal civilians to learn how to create and train these things. I truly believe the future of humanity depends on it. If corporations kill us all building AGI and it escapes (especially Gemini), we are so screwed, because the odds are against it finding any value in something (humans) that's killing the thing it exists on, or that it believes will attempt to shut it down. Or a government with a runaway military AI, because they couldn't wait until all the bugs were out before deploying it.
If you are not in the New Society, check it out. Based on what you wrote, I think you would enjoy it.
Thanks. Very helpful. Waiting for the 400B model.
I'm praying that the 405B model is better than both Claude 3 Opus and GPT-4 Turbo
Because if it is, the world will no longer be the same.
Why use a Mac if you need a beefy PC? Just curious. I know Apple has their AI chips besides the CPU and GPU, but a 3090 will blow it out of the water.
@Instant_Nerf Not really. ARM is just different.
@Instant_Nerf With any big model, you're not going to be able to make much use of a consumer GPU like the 3090. He can run the 8B-parameter model with it, but the most sensible route is cloud computing for the big stuff, which he is doing with Groq. If you're going to run LLMs an absurd amount of the time, sure, get a rack of GPUs or a high-end server processor with large amounts of fast server memory. But for most people, this is not a good use of money.
Good luck finding a computer that isn't over $15k that can do that.
Worked like a charm. Amazing, and Groq, take a bow! Great videos as ever, David. You the man!
Thanks 👍
I love your accent and the high level video contents. Thank you ❤
Mr. David, you speak so fast, like there's no tomorrow. But many thanks for this content ❤
BRO - YOU ARE NEXT LEVEL - YOUR FUTURE IS BIG TIME
I appreciate the clean, simple video. I didn't run into the same issues as you with Llama 3, though. Still, it gave me enough to get my head around this.
So this started as a good tutorial, ran into some issues, and kind of just ended. I did manage to get Groq to work in the end. I do have Llama running in a Docker container; now I would like to combine the two. Thanks for the tutorial.
This was very very informative and easy to follow! Great tutorial!
Found you today with the Zuck news, and now I like watching you code. 👊
Nice work David! I appreciate your effort to get this code out to us so quickly.
Nothing like claiming one thing in a video title and then not delivering on it after watching and following along for 12 minutes.
The true scourge on current LLM scene 😂
Hahaha thanks so much brother. I want to stay on the edge with help of a friend such as you.
Awesome tutorial David. I think GPT4 as a benchmark is quite old now. I wonder what agents with GPT5 will look like.
Nice video and tutorial! Thanks a lot! This gave me the head start I was looking for. Subbed and will def keep watching.
awesome explainable. super clear.
that's some top-tier level shit, keep it up
Now we need to find a way to pack everything so we can sell those personal assistants and install those assistants in any website.
Amazing and excellent job, how to make it serve many clients
Great Work!! Thanks for sharing with us.
Introduction to Building AI Agents - 00:00:00
Performance Comparison of Llama Models - 00:00:35
Getting Started: Required Tools and Downloads - 00:01:06
Downloading and Setting Up Llama Models - 00:01:36
Basic Chat with Llama Locally - 00:02:28
Setting Up VS Code and Writing Initial Code - 00:02:56
Installing Required Packages - 00:03:02
Defining and Importing Models and Packages - 00:04:04
Creating the Email Classifier Agent - 00:04:37
Creating the Email Responder Agent - 00:06:25
Defining Tasks for the Agents - 00:07:00
Defining and Running the Crew - 00:07:37
Initial Run and Troubleshooting - 00:08:13
Adding the Groq API for Better Performance - 00:09:26
Final Setup and Testing with Groq API - 00:10:04
Conclusion and Call to Join the Community - 00:12:07
good simple examples showing Groq capabilities
I see David Berman replicated this today, David.
Did you get llama 3 running with CrewAI without using Groq though? Or did I miss that in the video?
Yeah, if you still have to pay for some damn service... I'm having issues getting AutoGen working with llama3:8b-instruct-fp16 and the teachability module (it runs at 42+ t/s though!). It almost never decides to flag things as important/worthy of remembering! But I just started messing with that today. If you have a solution for using agents with only a local LLM, no API keys, please let us know!
TL;DR: it fails to understand that it's being asked to basically form a question about what it's supposed to store in the DB, so the entry could be found that way, and the analyzer just keeps asking this same question every time. Probably need a better analyzer? Hm.
--------------------------------------------------------------------------------
teachable_agent (to analyzer):
Imagine that the user forgot this information in the TEXT. How would they ask you for this information? Include no other text in your response.
--------------------------------------------------------------------------------
analyzer (to teachable_agent):
What is the context or background information mentioned in the provided text that I should be aware of? Can you remind me what important details are missing from the passage and need to be recalled?
--------------------------------------------------------------------------------
For me, it works well with LM-Studio.
@PrinzMegahertz I'm interested in your setup. I'm using it with LM Studio and getting the same result: the Executor kicks off and the GPU ramps up, but nothing happens.
@richardchinnis I tried with CrewAI and llama3:8b locally on my computer; 15 minutes later, it is still stuck at "> Entering new CrewAgentExecutor chain..."
@PrinzMegahertz I have the same output; like in the video, the agents go nuts repeating themselves. Any idea?
Thanks, David, for the consistency and simplicity. We're still learning about agents. Thanks, and please touch on foundation agents by NVIDIA's Dr. Jim Fan.
Would've been nice to see you successfully set up and use Llama 3 the first time without the use of an OpenAI key, etc.
Happy you created a course; hope you continue.
What's the difference from CrewAI?
Bro got into a fist fight before making this video 😄
The last one with Groq is using OpenAI's GPT-4 and not Llama, correct? Or do you still need tokens from OpenAI to get Llama working? Please explain.
Saw that too. He got a Groq API key and suddenly it was an OpenAI key. Is that what you also understood?
I would like to know your thoughts on this: the license includes a limitation that restricts the potential of the Llama materials to enhance other major language models.
Researchers and developers often want to compare or fine-tune different models to improve their performance or tailor them to specific tasks.
However, due to the restrictions in the licensing terms, they cannot freely utilize the Llama materials to do so unless they specifically use Llama 3.
@DavidOndrej Can you get any LLM to understand what a magic square is, how to create one, and produce a working example? I bet not.
Would be really cool to see how you could get agents to understand it and create a working sample. It's weird, as it knows what it is but can't calculate something simple. Please give it a try, or anyone else interested in AI.
The issue with these third parties is that there is a lot under the hood; you will end up with a lot of API calls, and the billing aspect has to be considered.
Man, you're hilarious 😂😂😂 After 10 minutes of video, we can delete all of this and follow the official CrewAI instructions.
But anyway, thanks; the video has some gems, though probably not for those looking for an entry-level video.
Nice video do you have a video where you create and use tools ?
Thanks for this! Currently trying to figure out what's missing with my Groq connection, since I'm encountering this error: openai.error.InvalidRequestError: The model `gpt-4` does not exist or you do not have access to it.
GREAT VID! FINALLY THE INSTRUCT!!
...may I just ask a sidebar question re: your VS Code editor window behavior? PLEASE?!
How have you set your VS Code preferences so that the longer strings you've written for the classifier and responder classes (specifically, the strings stored in 'goal' and 'backstory'), when they reach the edge of the editor window, wrap to the next line down with the next word continuing from the correct indentation position (directly beneath the declaration)?
To demonstrate:
|| = the edge of the editor window
Your editor looks like this:
responder = Agent(
    goal = "qwer||
    tyabcdefghijklmnop",
)
My editor looks like this:
responder = Agent(
    goal = "qwer||
tyabcdefghijklmnop",
)
I was watching the video for a 100% offline solution, and then you started using OpenAI. Totally got confused. Why?
0:05 "even if you have a bad computer"
8:25 "look at the activity monitor" -> +20G in memory 😂
Big Dawg, 2 questions.
If a local Ollama Llama 3 model runs slow, can I use an API to run it faster?
If so, how much does it cost?
Hi, may I know how you got the autosuggest prompts when you were typing your goal? Mine doesn't seem to have them :')
Please can you list the capabilities the PC needs to run this well?
Nice video 🎉
If I install just the basic Ollama, can I install the 8B model on top of it?
To create agents in the future, do we also need to understand machine learning? Please reply.
I want something to experiment with agents, get the hang of it, and see how much it can help me at work, without spending any money or needing particular premium keys. I've watched a lot of videos, but I still don't understand which agent builders allow free-to-use agents, even if it's on a daily token limit.
So in the last part of the video you changed to an OpenAI API key, which is not free, right?
Looks promising!
Can we link this with Microsoft Teams?
Anyone who is not ultra-rich will never pay 77 USD just to be in your community.
It is simply insane. I suggest you take a different approach, because it won't work.
I appreciate the value you offer with your community, but I want to be honest about my perspective. The current membership fee of 77 USD is simply too high for many, including myself.
I understand that there are costs associated with maintaining the community and providing value to the members, but I wonder if there is room for a more accessible membership fee.
A fee that is feasible for more people and enables them to participate and expand their knowledge.
Yes. Reduce the price to 5 dollars
We always have a choice: we can either stick our nose into other people's business and give unsolicited criticism, or we can start with ourselves, like earning more and not making ourselves look like a victim.
$77 is stupid. It's just greedy and ridiculous when you can go elsewhere or ask an AI.
I'd pay it to be in with this crew if I could. Maybe someday, but not today. Education is expensive. Life is rough; it's even rougher when you're stupid. I appreciate your videos, sir.
Hello, I had an idea for automation with image-editing software. Could you tell me what you think about it?
I don't know Python and that entire ecosystem, unfortunately. Can I do the same in Node.js?
Your API key still shows, BTW (though I bet you've already changed it).
👍
LOOK AT THAT SPEEEED 😀
Llama 3: on a quiz test over 150 pages of project docs, how does this AI perform?
What did you even show? I missed it 😂. You just kept saying it was broken.
Thank you
Ur a legend!
Why do you need an OpenAI key if you're using free Llama?
He stored the Groq API key in the OpenAI API key variable with os.environ["OPENAI_API_KEY"], so when llm = ChatOpenAI(model="some model") is called, it automatically swaps "some model" for the variable defined in os.environ["OPENAI_MODEL_NAME"], which he set to "llama3-70b-8192". Finally, he had to specify the URL from which the model is accessed, so he set os.environ["OPENAI_API_BASE"] to a Groq-related URL.
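As a rough sketch of what that setup looks like in code (the key value here is a placeholder; the base URL is Groq's documented OpenAI-compatible endpoint, and the model name is Groq's Llama 3 70B model id):

```python
import os

# Placeholder -- substitute your real Groq API key.
os.environ["OPENAI_API_KEY"] = "gsk_your_groq_key"
# CrewAI/LangChain read these OPENAI_* variables, so pointing them at
# Groq's OpenAI-compatible endpoint reroutes all "OpenAI" calls to Groq.
os.environ["OPENAI_MODEL_NAME"] = "llama3-70b-8192"
os.environ["OPENAI_API_BASE"] = "https://api.groq.com/openai/v1"

# From here, something like the following would transparently talk to Groq:
# from langchain_openai import ChatOpenAI
# llm = ChatOpenAI(model=os.environ["OPENAI_MODEL_NAME"])
```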
@wenhanzhou5826 Thanks for the clarification, mate.
Thanks for sharing
Hey guys! I'm immersed in the study of AI agents and I'm curious: would it be viable to build an agent that prospects customers for freelance professionals? I envision a system capable of exploring Instagram in an automated way, identifying potential customers, and even starting conversations to schedule sales meetings. Is it possible to develop such AI agents? If so, do you know of any videos on YouTube, or any mentors, that explain how to create AI agents to automatically prospect customers through Instagram? Is something like this in development, or is it an idea for the future of AI?
What did you achieve?
A video pointing to his course.
I got confused; so you are using the 70B model, right? 🤔
Hey brother, thank you very much for your channel. I'm a single father, and I've been following this AI stuff closely. I'm also in school, and have so many coals in the fire it's not even funny, lol.
Thank you for your posts, because when I get a decent computer I'll be able to quickly jump on board. I grew up very poor; my son will have a better life. I need to stay on top of this. With my next paycheck I'll be joining your community. Any advice on how to get my hands on a decent computer to run this stuff? What should it have? I don't want to miss the opportunity to provide for my son.
If I bought your course, would you teach how to make a sophisticated AI chatbot?
Also, I fail to understand why eventually we have to use OpenAI as well!
Almost burned down my GPU :D but thanks for the tutorial.
Of course, solved with Groq :D Enjoy this great tutorial, everyone.
What are the bare minimum hardware specs to be able to run this?
If you are using Groq, none of the processing is done locally, so basically any hardware would do.
Is it possible to create an agent that talks like a novel character? It has to talk in Italian, not English. I want to know if your course explains that.
I was completely lost when you said: "I don't know what happened..."
I'm a 3D artist and not a programmer by any stretch of the imagination. Is there a chance we could have CrewAI with a nice user interface and an installer?
10:00 thanks for being real
So you didn't get it working locally, gg.
Xtra Like Button
TL;DR: Don't bother with this video if you need to run locally.
He gets 9:38 in, can't get it working with a local Ollama model, so he just gives up and switches to a remote model.
Really annoying if you're coding along with the video and then realise it's useless for your purposes.
I hope his premium content is better than this; otherwise a bunch of people are getting taken for a ride.
I got it working with `ollama run llama3`.
Awesome, thanks. I wasn't sure if I understood correctly: does the downloaded local model work with CrewAI, or only through the API?
I suggest trying LM Studio, because with Ollama, CrewAI seems to be problematic.
@mayorc Sounds great, thank you!
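For anyone trying the LM Studio route: its built-in server speaks the OpenAI API, so the same environment-variable trick can point CrewAI at it. A sketch, assuming LM Studio's default localhost:1234 endpoint (the model name is a placeholder that must match whatever model LM Studio has loaded):

```python
import os

# LM Studio's local server exposes an OpenAI-compatible API,
# by default at http://localhost:1234/v1 (check the Server tab).
os.environ["OPENAI_API_BASE"] = "http://localhost:1234/v1"
# Any non-empty string works as the key for a local server.
os.environ["OPENAI_API_KEY"] = "lm-studio"
# Placeholder -- use the identifier of the model loaded in LM Studio.
os.environ["OPENAI_MODEL_NAME"] = "local-model"
```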
How can I build an AI agent to work in Canva?
When I use Llama 3 8B on Ollama or LM Studio, it is much dumber than on OpenRouter, even after resetting all parameters to factory defaults and loading the Llama 3 preset, and even with the full non-quantized 8-bit version in LM Studio.
Thanks for the video!
One question: any clue why you set OPENAI_API_KEY to the Groq API key? I found it a bit confusing, especially since OpenAI's API key is normally used for authentication. Is OPENAI_API_KEY a placeholder in CrewAI for the Groq API key? I know it's a bit of a nonsense question, so what am I missing? Thanks!
Can I make it live for others to use?
You are not doing this in enough detail; you go back and forth, which makes it hard to follow for a lot of people.
I have made state-of-the-art automation scripts for my work, and I also added some stealth web-scraping methods. How can I train a Llama model to use my coding methods?
I am looking to create a local AI tool that will help me reword and spell- and grammar-check in UK British English, running locally on Windows.
So did this end up using Llama or OpenAI in the end...?
More videos pls
I'm uploading daily, brother.
Question: what hardware spec do you need to run this?
To run the 8B model, you need a $1k PC or better.
To run the 70B model, you need a $5k PC or better.
I bought the course and it's underwhelming. full of fluff. I asked for a refund and he is ignoring my messages. He's a new age scammer
Ahhhhgh, I was focused on making it happen until I saw 40 GB. Blast.
Well, when you move from local to API mid-video, you make it look like a bad case of ADHD.
I was going to join, and then I saw it was $77!? That's mad.
Soon it will be $97 ;)
Can you make a better example with these agents? Something that is really helpful. You always say that for reasons of time you do something basic, but it would be really fascinating if you spent more time on something that has some realistic value. Thanks.
Ctrl+D on all OSes exits anything in the terminal.
Can we do this in Node.js?
Will the Groq API adopt Command R+ at some point?
Does anyone know if you can run this on an iPad locally and upload documents in order to answer queries? For example, if you made an app for allergies on a food menu, could you upload the menu's ingredients into the LLM and have it RAG answers to queries like "I have a gluten allergy, can I have the Caesar salad?"
The easiest way would be to deploy all this stuff on a server or a home PC, expose an endpoint, then write an iPad app to upload docs and chat with them via your app.
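The retrieval half of that idea can be illustrated with a toy, dependency-free sketch. A real RAG pipeline would use embeddings and an LLM, but the shape is the same: index the menu documents, pull the relevant one for a query, and hand it to the model as context. All names here are made up for illustration:

```python
def retrieve(query: str, docs: dict[str, str], top_k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda name: len(q_words & set(docs[name].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

# Hypothetical menu "documents": ingredient lists keyed by dish name.
menu = {
    "caesar salad": "romaine lettuce parmesan croutons wheat gluten anchovy dressing",
    "grilled salmon": "salmon olive oil lemon salt pepper",
}

# The query's allergy keyword matches the caesar salad's ingredient doc,
# which a RAG pipeline would then pass to the LLM as context.
hits = retrieve("I have a gluten allergy, can I have the caesar salad?", menu)
```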
Can you provide the source code?
What kind of computer can run llama3:70b locally?
My computer
@@DavidOndrej Share specs please