How To Connect Llama3 to CrewAI [Groq + Ollama]

  • Published May 16, 2024
  • 🤖 Download the Source Code Here:
    brandonhancock.io/llama3-crewai
    Don't forget to Like and Subscribe if you're a fan of free source code 😉
    📆 Need help with CrewAI, join our free Skool Community:
    skool.com/ai-developer-accele...
    This video is perfect for anyone eager to run Llama3 locally on their computer or in the cloud using Groq. We cover what Llama3 is and how it compares to other LLMs. Additionally, we explore how to connect Llama3 to CrewAI. The majority of the video is spent building an Instagram posting crew that generates image descriptions and text for Instagram using Llama3. By the end of this tutorial, you'll know how to set up, customize, and use Llama3 to automate tasks and enhance your project's capabilities. Get ready to upgrade your tech skills and make your work with AI more productive and innovative. Start now and see how simple it is to bring the power of Llama3 into CrewAI.
    📰 Stay updated with my latest projects and insights:
    LinkedIn: / brandon-hancock-ai
    Twitter: / bhancock_ai
    Resources:
    - CrewAI Crash Course - • CrewAI Tutorial: Compl...
    - Updated CrewAI Tutorial - • CrewAI Tutorial for Be...
    - How To Connect Local LLMs to CrewAI [Ollama, Llama2, Mistral] - • How To Connect Local L...
    - ollama: ollama.com/
    - Llama 3 ai.meta.com/blog/meta-llama-3/
    - Configure LLMs for CrewAI - docs.crewai.com/how-to/LLM-Co...
    - Instagram crew example: github.com/joaomdmoura/crewAI...
    Timestamps:
    00:00 Introduction
    00:12 Video Overview
    02:44 Llama 3 Overview, Comparison, & Testing
    07:06 Setup Llama3 Locally with Ollama
    12:05 Crew Overview
    13:20 Run CrewAI & Llama 3 Locally with Ollama & Crew Deep Dive
    22:18 Run CrewAI & Llama 3 with Groq
    27:39 Fix Rate Limiting with Groq
    29:27 Results
    31:01 Outro
  • Science & Technology
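Before the comments, a note on what the "connect Llama3 locally" part boils down to: Ollama serves pulled models over a local HTTP API on port 11434, and CrewAI (via its LLM integration) simply points a chat client at that endpoint. A minimal sketch of the raw request, assuming Ollama is installed and `ollama pull llama3` has already been run:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint


def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but don't send) a chat request for a locally served Ollama model."""
    payload = {
        "model": model,                                    # e.g. "llama3"
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,                                   # one JSON reply, not a stream
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


# Actually sending it requires a running Ollama server:
# with urllib.request.urlopen(build_chat_request("llama3", "Hi")) as resp:
#     print(json.loads(resp.read())["message"]["content"])
```

CrewAI never makes you write this by hand; it is what the framework does under the hood once you configure an Ollama-backed LLM, which is why the video only has you install Ollama and pull the model.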

Comments • 51

  • @MariodeFelipe
    @MariodeFelipe 21 days ago +2

    The quality is 10/10 thanks mate

    • @bhancock_ai
      @bhancock_ai  17 days ago

      Thank you Mario!

  • @GregPeters1
    @GregPeters1 22 days ago +3

    Hey Brandon, welcome back after your vacay!

    • @bhancock_ai
      @bhancock_ai  21 days ago +2

      Feels good to be back! I'm recharged and ready to go!

  • @theBookofIsaiah33ad
    @theBookofIsaiah33ad 19 days ago +1

    Man, I do not know how to create and write code but you have made a video and I think I can do this! Bless you my friend!

    • @bhancock_ai
      @bhancock_ai  17 days ago

      Thank you! I'm confident you can do it! Let me know if you need help with anything!

  • @tapos999
    @tapos999 16 days ago

    Thanks! Your CrewAI tutorials are top-of-the-shelf stuff. Do you have any CrewAI project with Streamlit connected to show output in the UI? Thanks

  • @CodeSnap01
    @CodeSnap01 21 days ago

    Refreshed after a short vacation.. hope to see you frequently

  • @d.d.z.
    @d.d.z. 18 days ago +3

    Friendly comment: You look better with glasses, more professional. Great content.

    • @bhancock_ai
      @bhancock_ai  17 days ago

      Hey D! Thanks!
      I love wearing glasses and hate my contacts so I think I might need to go full glasses mode 🤓

  • @reidelliot1972
    @reidelliot1972 20 days ago

    Great content as always! Do you know if it's sustainable to use a single groqcloud API key to host LLM access for a multi-user app? Or would a service like AWS Sagemaker be better for simultaneous users?
    Cheers!

  • @nathankasa6220
    @nathankasa6220 21 days ago +1

    Thanks! Is Claude 3 opus still not supported though? How come?

  • @Omobilo
    @Omobilo 16 days ago

    Great stuff. Maybe a silly question, but when it was fetching data from a remote website (the analysis part), does it read it remotely, OR does it capture screenshots and download text to feed into its prompt and then clear this cached data, or does such local cached data need to be cleaned up eventually? Hope it simply reads remotely without too much data saved locally, as I plan to use this approach to analyze many websites without flooding my local storage.

  • @thefutureisbright
    @thefutureisbright 22 days ago +1

    Brandon excellent tutorial 👍

    • @bhancock_ai
      @bhancock_ai  21 days ago

      Thanks man! I really appreciate it!

  • @protovici1476
    @protovici1476 21 days ago +3

    Excellent video! Would be interesting to see these frameworks, but within LightningAI Studios. Also, I saw CrewAI will be having a more golden standard approach to their code structuring in the near future.

    • @bhancock_ai
      @bhancock_ai  21 days ago +3

      Thank you! And you're definitely right about CrewAI moving towards YAML. When CrewAI+ drops, I plan on making a lot more content around this new format for you guys!
      And, I haven't tried out LightningAI Studio yet so I'll definitely have to try it out this weekend. Thanks for the suggestion!

    • @protovici1476
      @protovici1476 21 days ago

      @@bhancock_ai Great! I like the YAML approach. William Falcon, who started LightningAI (PyTorch Lightning), likes my posts on LinkedIn when I mention them, as I'm a huge fan of developing with it. Will be studying your approach with the latest updates and hopefully with their Studio. Thanks!!

  • @Ryan.Youtube
    @Ryan.Youtube 21 days ago +1

    This is awesome! 😎

    • @bhancock_ai
      @bhancock_ai  21 days ago

      Thanks! 😁

  • @ag36015
    @ag36015 17 days ago

    What would you say are the minimum hardware requirements to make it run smoothly?

  • @am0x01
    @am0x01 14 days ago

    Appreciate your support (with this content); the only drawback was the need to subscribe to get access to a project that isn't yours. 😞

  • @clinton2312
    @clinton2312 20 days ago +1

    Thank you :)

  • @bennie_pie
    @bennie_pie 18 days ago

    Thank you for this and for the code. How does Llama 3 compare to Dolphin-Mistral 2.8 running locally as the more junior agents, do you know? Dolphin-Mistral, with its extra conversation/coding training and bigger 32k context window, appeals! I've had agents go round in circles creating nonsense with other frameworks because they don't remember what they are supposed to do! A big context window definitely could bring some benefits! I try to avoid using GPT-3.5 or 4 for coding for this reason. I'd then like to use Claude 3 Opus with its 200k context window and extra capability for the heavy lifting and oversight!

  • @mikesara7032
    @mikesara7032 22 days ago +1

    You're awesome, thank you!

    • @bhancock_ai
      @bhancock_ai  22 days ago +1

      Thanks Mike! You're awesome too!

  • @jarad4621
    @jarad4621 20 days ago

    Hi Brandon, the Groq rate limit is a big issue for my use case. Can I use this same method with another similarly hosted Llama 3 70b with CrewAI, like the OpenRouter API, or can any API be used instead of Groq with your method?

    • @jarad4621
      @jarad4621 20 days ago

      Oh I see, it has to be an API already supported by LangChain, correct, or it won't work?

  • @shuntera
    @shuntera 21 days ago +1

    With both the Groq 8b and 70b, with crew max_rpm set at either 1 or 2, I do get it halting for a while with:
    [INFO]: Max RPM reached, waiting for next minute to start.

    • @bhancock_ai
      @bhancock_ai  21 days ago

      The problem is that Groq is so fast that it ends up processing too many tokens so it ends up hitting a rate limit and failing.
      To get around that, we have to slow down our crew by setting the max RPM.
      Feel free to bump it up to get your crew to move faster!
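For context on the exchange above: `max_rpm` is a real argument on CrewAI's `Crew` constructor (e.g. `Crew(agents=[...], tasks=[...], max_rpm=2)`), and it behaves like a rolling-window requests-per-minute throttle. A simplified sketch of the idea, not CrewAI's actual implementation:

```python
import time


class RpmThrottle:
    """Allow at most max_rpm calls per rolling 60-second window."""

    def __init__(self, max_rpm, clock=time.monotonic):
        self.max_rpm = max_rpm
        self.clock = clock    # injectable for testing
        self.calls = []       # timestamps of calls within the current window

    def wait_time(self):
        """Seconds to wait before the next call is allowed (0.0 if allowed now)."""
        now = self.clock()
        # Drop timestamps that have aged out of the 60-second window.
        self.calls = [t for t in self.calls if now - t < 60]
        if len(self.calls) < self.max_rpm:
            return 0.0
        # Window is full: wait until the oldest call expires.
        return 60 - (now - self.calls[0])

    def record(self):
        """Mark that a call was just made."""
        self.calls.append(self.clock())
```

This is why Groq's speed paradoxically causes stalls: the crew burns through its per-minute token/request budget quickly, then sits in the "waiting for next minute" state until the window rolls over.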

  • @jalapenos12
    @jalapenos12 17 days ago +1

    Just curious why VSCode doesn't display file types on Mac. I'm going bonkers trying to figure out what to save the Modelfile as.

    • @bhancock_ai
      @bhancock_ai  17 days ago

      Hey! There actually isn't a file type for that file. You can just leave it how it is. Hope that helps!

    • @jalapenos12
      @jalapenos12 13 days ago

      @@bhancock_ai Thanks for the quick response. I figured out that ".txt" works for those of us in other operating systems.
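For anyone else stuck on the same step: an Ollama Modelfile is just a plain-text file, conventionally named `Modelfile` with no extension (a `.txt` extension also works, as the thread above found). A minimal example for llama3, with an illustrative context-window override:

```
FROM llama3
PARAMETER num_ctx 8192
```

It is registered with `ollama create my-llama3 -f Modelfile`, after which the model is referenced by the new name (`my-llama3`).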

  • @markdkberry
    @markdkberry 10 days ago

    Perfect run with Groq. I get great speeds on my PC with local Llama3, but nothing I can do stops it throwing errors. I've found before with other projects that a lot of local LLMs have odd issues that change each run, from failing to run functions (because they change the name) to just saying they can't pass tool info. Must be something in the local limitations, either with Ollama or CrewAI.

  • @shuntera
    @shuntera 20 days ago

    That is using a very old version of CrewAI - if you run it with the current version of CrewAI, it fails because of the missing expected_output parameter in the Tasks

  • @ashwinsveta
    @ashwinsveta 12 days ago

    I get an error; then I noticed that we need the search API. I added that but still get the error:
    pydantic_core._pydantic_core.ValidationError: 1 validation error for Task
    expected_output
    Field required [type=missing, input_value={'description': "Analyze ...e business landscapes.)}, input_type=dict]
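That traceback is pydantic enforcing a now-required field: newer CrewAI versions require `expected_output` on every `Task`. A minimal reproduction of the validation pattern using a hypothetical stand-in model (plain pydantic, not CrewAI's actual class):

```python
from pydantic import BaseModel, ValidationError


class TaskSpec(BaseModel):
    """Hypothetical stand-in for the required fields on crewai.Task."""
    description: str
    expected_output: str  # omitting this raises the "Field required" error above


try:
    TaskSpec(description="Analyze the market")
except ValidationError as exc:
    missing = [e["loc"][0] for e in exc.errors() if e["type"] == "missing"]
    print("missing fields:", missing)  # e.g. ['expected_output']

# The fix: always supply expected_output when constructing a Task.
task = TaskSpec(
    description="Analyze the market",
    expected_output="A concise report of key trends",
)
```

The same fix applies to the sample code from the video: add an `expected_output` string to each `Task(...)` call when running against a current CrewAI release.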

  • @rauljauregi6615
    @rauljauregi6615 21 days ago

    Nice! 😃

  • @jarad4621
    @jarad4621 17 days ago

    Please, for the love of god, somebody explain to me why we are using Ollama to download local models and then using Groq anyway to run the model in the cloud. Why can't we just skip the Ollama part? I beg you, I see all the videos using Ollama with Groq and I don't understand the aspect! Thank you. Does Ollama do something special to make it work better for CrewAI than a direct Groq connection?

  • @magnuscarlsson5067
    @magnuscarlsson5067 21 days ago

    What graphic card do you use on your computer when running local with Ollama?

    • @krysc4d
      @krysc4d 20 days ago

      The key is VRAM. I can run llama 3 70b smoothly on an RTX 3090, hitting about 16GB of VRAM (if I remember correctly)

  • @pratyushsrivastava6646
    @pratyushsrivastava6646 21 days ago +2

    Hello sir
    Nice content

    • @bhancock_ai
      @bhancock_ai  21 days ago +2

      Thanks Pratyush! I really appreciate it!

    • @pratyushsrivastava6646
      @pratyushsrivastava6646 21 days ago

      @@bhancock_ai how can I connect with you

  • @ryana2952
    @ryana2952 5 days ago

    Is there an easy way to build no-code AI assistants or agents with Groq? I know zero code

  • @ZombieGamerRealm
    @ZombieGamerRealm 10 days ago

    Guys, do you know any way to run CrewAI and/or Llama on GPU? CPU-only is so slow

    • @ArseniyPotapov
      @ArseniyPotapov 6 days ago

      llama.cpp (what Ollama is based on) or vLLM

  • @deadbody408
    @deadbody408 21 days ago +1

    might want to revoke those keys you revealed if you haven't

  • @miaohf
    @miaohf 19 days ago

    Very good video demonstration. I noticed that you chose to use Serper search in the video. I would like to know the difference between Serper and DuckDuckGo search and how to choose between them. If you know, please introduce it to me. Thank you.

  • @raghuls1469
    @raghuls1469 14 days ago

    Hello Brandon, thanks for the awesome video. I was trying to do the same setup with CrewAI, but I am getting an error while running; I added the error message below:
    Traceback (most recent call last):
    File "D:\crew_ai\crew.py", line 114, in
    result = crew.kickoff()
    ^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\crew.py", line 252, in kickoff
    result = self._run_sequential_process()
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\crew.py", line 293, in _run_sequential_process
    output = task.execute(context=task_output)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\task.py", line 173, in execute
    result = self._execute(
    ^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\task.py", line 182, in _execute
    result = agent.execute_task(
    ^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\agent.py", line 207, in execute_task
    memory = contextual_memory.build_context_for_task(task, context)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\memory\contextual\contextual_memory.py", line 22, in build_context_for_task
    context.append(self._fetch_stm_context(query))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\memory\contextual\contextual_memory.py", line 31, in _fetch_stm_context
    stm_results = self.stm.search(query)
    ^^^^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\memory\short_term\short_term_memory.py", line 23, in search
    return self.storage.search(query=query, score_threshold=score_threshold)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\crewai\memory\storage\rag_storage.py", line 90, in search
    else self.app.search(query, limit)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\embedchain\embedchain.py", line 635, in search
    return [{"context": c[0], "metadata": c[1]} for c in self.db.query(**params)]
    ^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\embedchain\vectordb\chroma.py", line 220, in query
    result = self.collection.query(
    ^^^^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\chromadb\api\models\Collection.py", line 327, in query
    valid_query_embeddings = self._embed(input=valid_query_texts)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\chromadb\api\models\Collection.py", line 633, in _embed
    return self._embedding_function(input=input)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\chromadb\api\types.py", line 193, in __call__
    result = call(self, input)
    ^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\chromadb\utils\embedding_functions.py", line 188, in __call__
    embeddings = self._client.create(
    ^^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\openai\resources\embeddings.py", line 113, in create
    return self._post(
    ^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\openai\_base_client.py", line 1232, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\openai\_base_client.py", line 921, in request
    return self._request(
    ^^^^^^^^^^^^^^
    File "D:\crew_ai\.my_crew_env\Lib\site-packages\openai\_base_client.py", line 1012, in _request
    raise self._make_status_error_from_response(err.response) from None
    openai.NotFoundError: 404 page not found