Python Advanced AI Agent Tutorial - LlamaIndex, Ollama and Multi-LLM!

  • Published Jun 12, 2024
  • Interested in AI development? Then you are in the right place! Today I'm going to be showing you how to develop an advanced AI agent that uses multiple LLMs.
    If you want to land a developer job: techwithtim.net/dev
    🎞 Video Resources 🎞
    Code: github.com/techwithtim/AI-Age...
    Requirements.txt: github.com/techwithtim/AI-Age...
    Download Ollama: github.com/ollama/ollama
    Create a LlamaCloud Account to Use LLama Parse: cloud.llamaindex.ai
    Info on LLama Parse: www.llamaindex.ai/blog/introd...
    Understanding RAG: • Why Everyone is Freaki...
    ⏳ Timestamps ⏳
    00:00 | Video Overview
    00:42 | Project Demo
    03:49 | Agents & Projects
    05:44 | Installation/Setup
    09:26 | Ollama Setup
    14:18 | Loading PDF Data
    21:16 | Using LlamaParse
    26:20 | Creating Tools & Agents
    32:31 | The Code Reader Tool
    38:50 | Output-Parser & Second LLM
    48:20 | Retry Handle
    50:20 | Saving To A File
    Hashtags
    #techwithtim
    #machinelearning
    #aiagents

Comments • 133

  • @257.4MHz
    @257.4MHz months ago +17

    You are one of the best explainers ever, out of 50 years of listening to thousands of people trying to explain thousands of things. Also, it's raining and thundering outside while I'm creating this monster; I feel like Dr. Frankenstein.

    • @justcars2454
      @justcars2454 months ago +1

      50 years of listening and learning; I'm sure you have great knowledge.

  • @bajerra9517
    @bajerra9517 months ago +5

    I wanted to express my gratitude for the Python Advanced AI Agent Tutorial - LlamaIndex, Ollama and Multi-LLM! This tutorial has been incredibly helpful in my journey to learn and apply advanced AI techniques in my projects. The clear explanations and step-by-step examples have made it easy for me to understand and implement these powerful tools. Thank you for sharing your knowledge and expertise!

  • @samliske1482
    @samliske1482 months ago +2

    You are by far my favorite tech educator on this platform. Feels like you fill in every gap left by my curriculum and inspire me to go further with my own projects. Thanks for everything!

  • @Batselot
    @Batselot months ago +9

    I was really looking forward to learning this. Thanks for the video.

  • @AlexKraken
    @AlexKraken months ago +5

    If you keep getting timeout errors and happen to be using a somewhat lackluster computer like mine, changing `request_timeout` in these lines
    llm = Ollama(model="mistral", request_timeout=3600.0)
    ...
    code_llm = Ollama(model="codellama", request_timeout=3600.0)
    to a larger number (3600.0 is 1 hour, but it usually takes only 10 minutes) helped me out. Thanks for the tutorial!

  • @techgiant__
    @techgiant__ months ago +4

    Just used your code with llama 3, and made the code generator a function tool, and it was fvcking awesome. Thanks for sharing👍🏻

  • @briancoalson
    @briancoalson months ago +6

    Some helpful things when going through this:
    - Your Python version needs to be

    • @mikewebb3855
      @mikewebb3855 months ago

      For me, once I installed Xcode, I reran the package install and was able to get the llama_cpp_python wheel to build. Thanks for this note; it helped make sense of the error message.

    • @dearadulthoodhopeicantrust6155
      @dearadulthoodhopeicantrust6155 25 days ago

      Yup, I encountered this on Windows. In Visual Studio, Ctrl+Shift+P opens a search bar; I searched for "interpreter" and was able to access previous versions of Python in different environments. I selected a Conda environment and opened a new terminal, checked python --version, and the selected Python version was active.

  • @ChadHuffman
    @ChadHuffman months ago +1

    Amazing as always, Tim. Thanks for spending the time to walk through this great set of tools. I'm looking forward to trying this out with data tables and PDF articles on parsing these particular data sets to see what comes out the other side. If you want to take this in a different direction, I'd love to see how you would take PDFs on how different parts of a system work and their troubleshooting methodology and then throw functional data at the LLM with errors you might see. I suspect (like other paid LLMs) it could draw some solid conclusions. Cheers!

  • @ft4jemc
    @ft4jemc months ago +8

    Great video. Would really like to see methods that don't involve reaching out to the cloud but keep everything local.

  • @beautybarconn
    @beautybarconn 25 days ago +2

    No idea what’s going on but I love falling asleep to these videos 😊

  • @ravi1341975
    @ravi1341975 months ago +2

    Wow, this is absolutely mind-blowing. Thanks Tim.

  • @seanbergman8927
    @seanbergman8927 months ago +1

    Excellent demo! I liked seeing it built in VS Code with loops, unlike many demos that are in Jupyter notebooks and can't run this way.
    Regarding more demos like this… Yes!! Most definitely could learn a lot from more and more advanced LlamaIndex agent demos. Would be great to see a demo that uses their chat agent and maintains chat state for follow-up questions. Even more advanced and awesome would be an example where the agent asks a follow-up question if it needs more information to complete a task.

  • @jorgitozor
    @jorgitozor months ago

    This is very clear and very instructive, so much valuable information! Thanks for your work

  • @garybpt
    @garybpt months ago

    This was fascinating, I'm definitely going to be giving it a whirl! I'd love to learn how something like this could be adapted to write articles using information from our own files.

  • @vaughanjackson2262
    @vaughanjackson2262 months ago +3

    Great vid. The only issue is the fact that the parsing is done externally. For RAGs ingesting sensitive data, this would be a major issue.

  • @davidtindell950
    @davidtindell950 23 days ago +1

    Thank you for this very informative video. I really like the capabilities of LlamaIndex with PDFs.
    I used it to process several of my own medium-size PDFs and it was very quick and correct.
    It would be great to have another video on how to save and reuse the VectorStore for queries
    against PDFs already processed. To me this is even more important than the code generation.

  • @AaronGayah-dr8lu
    @AaronGayah-dr8lu 11 days ago

    This was brilliant, thank you.

  • @camaycama7479
    @camaycama7479 months ago

    Awesome video, man thx a big bunch!

  • @tomasemilio
    @tomasemilio 27 days ago

    Bro your videos are gold.

  • @samwamae6498
    @samwamae6498 months ago +1

    Awesome 💯

  • @ChathurangaBW
    @ChathurangaBW 20 days ago

    just awesome !

  • @nour.mokrani
    @nour.mokrani months ago +2

    Thanks for this tutorial and your way of explaining; I've been looking for this.
    Can you also make a video on how to build enterprise-grade generative AI with NVIDIA NeMo? That would be so interesting. Thanks again!

  • @equious8413
    @equious8413 26 days ago

    "If I fix these up." My god, Tim. You know that won't scale.

  • @seanh1591
    @seanh1591 months ago +2

    Tim - thanks for the wonderful video. Very well done sir!! Is there an alternative to LlamaParse to keep the parsing local?

  • @purvislewies3118
    @purvislewies3118 18 days ago

    Yes man... this is what I want to do, and more...

  • @Ari-pq4db
    @Ari-pq4db months ago

    Nice ❤

  • @SashoSuper
    @SashoSuper months ago

    Nice one

  • @kayoutube690
    @kayoutube690 22 days ago

    New subscriber here!!!

  • @Pushedrabbit699-lk6cr
    @Pushedrabbit699-lk6cr months ago +3

    Could you also do a video on infinite world generation using chunks for RPG type pygame games?

  • @billturner2112
    @billturner2112 months ago

    I liked this. Out of curiosity, why venv rather than Conda?

  • @ricardokullock2535
    @ricardokullock2535 21 days ago

    The guys at llmware have some fine-tuned models for RAG and some for function calling (outputting structured data). Could be interesting to try out with this.

  • @henrylam4934
    @henrylam4934 months ago

    Thanks for the tutorial. Is there an alternative to LlamaParse that allows me to run the application completely locally?

  • @mredmister3014
    @mredmister3014 15 days ago

    Good video, but do you have a complete AI agent example that uses your own data, without the code-formatting part? This is the closest tutorial I've found for an on-premises AI agent implementation that I can understand. Thanks!

  • @nikta456
    @nikta456 months ago

    Please create a video about production-ready AI agents!

  • @blissfulDew
    @blissfulDew months ago +1

    Thanks for this!! Unfortunately I can't run it on my laptop; it takes forever and the AI seems confused. I guess it needs a powerful machine...

  • @robertwclayton6962
    @robertwclayton6962 months ago

    Great video tutorial! Thanks 🙌
    (liked and subscribed, lol)
    A bit of a "noob" developer here, so vids like this really help.
    I know it's a lot to ask, but....
    I was wondering if you might consider showing us how to build a more modular app, where we have separate `.py` files to ingest and embed our docs, then another to create and/or add embeddings to a vector DB (like Chroma), then another for querying the DB. Would this be possible?
    It would be nice to know how to have data from one Python file feed data to another, while also minimizing redundancy (e.g., IF `chroma_db` already exists, the `query.py` file will know to load the db and query with LlamaIndex accordingly)
    Even better if you can show us how to make our `query_engine` remember users' prior prompts (during a single session).
    Super BONUS POINTS if you can show us how to then feed the `query.py` data into a front-end interface for an interactive chat with a nice UI.
    Phew! That was a lot 😂
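    The load-or-build split asked about here can be sketched with only the standard library. This is a hedged sketch, not the tutorial's code: `PERSIST_DIR`, `build_index`, and `load_index` are hypothetical placeholders standing in for the real embedding and Chroma-loading steps a `query.py` would perform.

    ```python
    import os

    PERSIST_DIR = "./chroma_db"  # hypothetical location of the persisted vector store

    def build_index():
        # placeholder: ingest documents, embed them, and persist to PERSIST_DIR
        os.makedirs(PERSIST_DIR, exist_ok=True)
        return "index built"

    def load_index():
        # placeholder: load the already-persisted vector store from PERSIST_DIR
        return "index loaded"

    def get_index():
        """Reuse the persisted store when it exists; build it otherwise."""
        if os.path.isdir(PERSIST_DIR):
            return load_index()
        return build_index()
    ```

    The first run pays the ingestion cost; every later run only loads, which is the redundancy-avoidance the comment describes.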

  • @JNET_Reloaded
    @JNET_Reloaded months ago

    nice

  • @camaycama7479
    @camaycama7479 months ago

    Will Mistral Large be available? I'm wondering whether LLM availability will stay up to date or whether there are other steps to take.

  • @sethngetich4144
    @sethngetich4144 months ago +5

    I keep getting errors when trying to install the dependencies from requirements.txt

    • @I2ealTuber
      @I2ealTuber 28 days ago +1

      Make sure you have the correct version of python

    • @user-zq2nr2sp7o
      @user-zq2nr2sp7o 3 days ago

      Or better, pip install them manually; that's what I prefer.

  • @meeFaizul
    @meeFaizul months ago

    ❤❤❤❤❤❤

  • @willlywillly
    @willlywillly months ago

    Another great tutorial... Thank You! How do I get in touch with you, Tim, for consulting?

    • @TechWithTim
      @TechWithTim  months ago +2

      Send an email to the address listed on my About page on YouTube.

  • @kodiak809
    @kodiak809 months ago

    So Ollama runs locally on your machine? Can I make it cloud-based by deploying it in my backend?

  • @_HodBuri_
    @_HodBuri_ months ago +3

    Error 404 not found - localhost - api - chat [FIX]
    If anyone else gets an error like that when trying to run the codellama agent, just run the codellama LLM in a terminal to download it; it did not download automatically for me, as he mentions around 29:11.
    Similar to what he showed at the start with Mistral (ollama run mistral), you can run this in a new terminal to download codellama:
    ollama run codellama

  • @Pyth_onist
    @Pyth_onist months ago

    I did one using Llama2.

    • @giovannip.6473
      @giovannip.6473 months ago

      are you sharing it somewhere?

  • @unflappableunflappable1248
    @unflappableunflappable1248 months ago

    Cool!

  • @jay.hiraya
    @jay.hiraya months ago

    what keyboard are you using? 😊

  • @adilzahir9921
    @adilzahir9921 months ago

    Can I use this to make an AI agent that can call customers, interact with them, and take notes on what happens? Thanks!

  • @WismutHansen
    @WismutHansen months ago +2

    You obviously went to the Matthew Berman School of I'll revoke this API Key before publishing this video!

  • @mayerxc
    @mayerxc months ago +6

    What are your MacBook Pro specs? I'm looking for a new computer to run LLMs locally.

    • @techgiant__
      @techgiant__ months ago +6

      Buy a workstation with a very good Nvidia GPU so you can use CUDA. If you still want to go for a MacBook Pro, get the M2 with 32 GB or 64 GB of RAM. I'm using a 16-inch MacBook M1 with 16 GB of RAM and I can only run 7B-13B LLMs without crashing it.

    • @TechWithTim
      @TechWithTim  months ago +2

      I have an M2 Max

    • @GiustinoEsposito98
      @GiustinoEsposito98 months ago

      Have you ever thought about using Colab as a remote webserver with a local LLM such as Llama 3, calling it from your PC to get predictions? I have the same problem and was thinking about solving it like this.

    • @iamderrickfoo
      @iamderrickfoo 14 days ago

      My MacBook Pro M1 8GB hangs while running the LLM locally. Any alternatives we can use to build without killing my MBP?

  • @anandvishwakarma933
    @anandvishwakarma933 months ago

    Hey, can you share the system configuration needed to run this application?

  • @hamsehassan7304
    @hamsehassan7304 23 days ago

    Every time I try to install the requirements.txt files, it only downloads some of the content and then I get this error message: Requires-Python >=3.8.1. I'm running this on a Mac with Python 3.12.3 and can't seem to install an older version of Python.

  • @bigbena23
    @bigbena23 months ago

    What if I don't want my data processed in the cloud? Is there an alternative to LlamaParse that can be run locally?

  • @Marven2
    @Marven2 months ago

    Can you make a series?

  • @RolandDewonou
    @RolandDewonou months ago

    It seems multiple entries in the requirements.txt require different versions of Python and other libraries. Could you clarify which versions of what are needed for this to work?

  • @vedantbande5682
    @vedantbande5682 23 days ago

    How do we know which of the requirements.txt dependencies are required (it's a large list)?

  • @avxqt966
    @avxqt966 months ago +1

    I can't install packages of llama-index in my Windows system. Also, the 'guidance' package is showing an error

  • @danyloustymenko7465
    @danyloustymenko7465 months ago

    What's the latency of models running locally?

  • @Czarlsen
    @Czarlsen 25 days ago

    Is there much difference between result_type = "Markdown" and result_type = "text"?

  • @DomenicoDiFina
    @DomenicoDiFina months ago

    Is it possible to create an agent using other languages?

  • @JRis44
    @JRis44 17 days ago

    Dang, seems I'm stuck with a 404 message @ 31:57.
    Anyone else have that issue, or a fix for it? Maybe the dependencies need an update already?

  • @user-zx9pz3dn8b
    @user-zx9pz3dn8b months ago

    Why did I need to downgrade Python from 3.12 to 3.11 to install requirements.txt (some dependencies require a Python version below 3.12), while I see you using Python 3 with no errors?

  • @dolapoadefisayomioluwole1341
    @dolapoadefisayomioluwole1341 months ago

    First to comment today 😂

  • @AndyPandy-ni1io
    @AndyPandy-ni1io 10 hours ago

    What am I doing wrong? When I run it, it does not work no matter what I try.

  • @Darkvader9427
    @Darkvader9427 23 days ago

    Can I do the same using LangChain?

  • @radheyakhade9853
    @radheyakhade9853 months ago

    Can anyone tell me what basics one should know before going into this video?

  • @amruts4640
    @amruts4640 months ago

    Can you please do a video about making a GUI in Python?

  • @levinkrieger8452
    @levinkrieger8452 months ago

    First

  • @notaras1985
    @notaras1985 months ago

    How do we know that Meta hasn't corrupted the ollama model with spyware or other malicious code?

  • @ofeksh
    @ofeksh months ago

    Hi Tim!
    GREAT JOB on pretty much everything!
    BUT, I have a problem:
    I'm running on Windows with PyCharm, and it shows an error when installing the requirements.
    Because it's PyCharm, I have two options for installing the requirements: one from within PyCharm and one from the terminal.
    Both options show a similar (but not identical) error.
    Can you please help me with it?

    • @diegoromo4819
      @diegoromo4819 months ago +1

      you can check which python version you have installed.

    • @ofeksh
      @ofeksh months ago

      @@diegoromo4819 Hey, thank you for your response. Which version should I have? I can't find it in the video.

    • @neilpayne8244
      @neilpayne8244 months ago +1

      @@ofeksh 3.11

    • @ofeksh
      @ofeksh months ago

      @@neilpayne8244 shit, that's my version...

  • @technobabble77
    @technobabble77 months ago

    I'm getting the following when I run the prompt:
    Error occured, retry #1: timed out
    Error occured, retry #2: timed out
    Error occured, retry #3: timed out
    Unable to process request, try again...
    What is this timing out on?

    • @coconut_bliss5539
      @coconut_bliss5539 months ago

      Your Agent is unable to reach your Ollama server. It's repeatedly trying to query your Ollama server's API on localhost, then those requests are timing out. Check if your Ollama LLM is initializing correctly. Also make sure your Agent constructor contains the correct LLM argument.

    • @TballaJones
      @TballaJones months ago

      Do you have a VPN like NordVPN running? Sometimes that can mess up local servers.
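      The retry behavior behind the messages quoted in this thread (the tutorial's "Retry Handle" step, 48:20) can be sketched in plain Python. This is a hedged sketch, not the tutorial's exact code: `query_fn` is a hypothetical stand-in for the agent's query call, and the message wording (including the original "occured" spelling) just mirrors the output quoted above.

      ```python
      import time

      def query_with_retries(query_fn, prompt, max_retries=3, delay=1.0):
          """Retry a flaky agent/LLM call a fixed number of times."""
          for attempt in range(1, max_retries + 1):
              try:
                  return query_fn(prompt)
              except TimeoutError as exc:
                  # wording mirrors the quoted tutorial output
                  print(f"Error occured, retry #{attempt}: {exc}")
                  time.sleep(delay)  # brief pause before the next attempt
          print("Unable to process request, try again...")
          return None
      ```

      Seeing all three retries time out, as in the comment above, means every attempt failed to reach the model, which is why the replies point at the Ollama server rather than at this loop.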

  • @joshuaarinaitwe8351
    @joshuaarinaitwe8351 months ago

    Hey Tim, great video. I have been watching your videos for some time, though I was definitely young then. I need some guidance: I'm 17 and want to do an AI and machine learning course. Can somebody advise me?

  • @nikta456
    @nikta456 28 days ago

    Problems?
    # make sure the LLM is listening
    `pip install llama-index qdrant_client torch transformers` `pip install llama-index-llms-ollama`
    # didn't download codellama
    `ollama pull codellama`
    # timeout error
    set request_timeout to 500

  • @adilzahir9921
    @adilzahir9921 months ago

    What's the minimum laptop spec to run this model? Thanks!

    • @samohtGTO
      @samohtGTO months ago +1

      You need a good GPU to run literally any LLM.

  • @Aiden-rz6vf
    @Aiden-rz6vf months ago

    Llama 3

  • @257.4MHz
    @257.4MHz months ago +1

    Well, I can't get it to work. It gives 404 on /api/chat

    • @omkarkakade3438
      @omkarkakade3438 months ago

      I am getting the same error

    • @mrarm4x
      @mrarm4x months ago +2

      You are probably getting this error because you are missing the codellama model; run `ollama pull codellama` and it should fix it.

  • @ases4320
    @ases4320 months ago

    But this is not completely "local" since you need an API key, no?

    • @matteominellono
      @matteominellono months ago

      These APIs are used within the same environment or system, enabling different software components or applications to communicate with each other locally without the need to go through a network.
      This is common in software libraries, operating systems, or applications where different modules or plugins need to interact.
      Local APIs are accessed directly by the program without the latency or the overhead associated with network communications.

  • @dr_harrington
    @dr_harrington months ago

    DEAL BREAKER:
    17:20 "What this will do is actually take our documents and push them out to the cloud."

  • @Meir-ld2yi
    @Meir-ld2yi months ago

    Ollama Mistral runs so slowly that even a "hello" takes like 20 minutes.

  • @neiladriangomez
    @neiladriangomez months ago

    I'll come back to this in a couple of months. Too advanced for me; my head is spinning and I cannot grasp a single thing 😵‍💫

    • @TechWithTim
      @TechWithTim  months ago +2

      Haha no problem! I have some easier ones on the channel

    • @cocgamingstar6990
      @cocgamingstar6990 months ago

      Me too😅

    • @alantripp6175
      @alantripp6175 months ago

      I can't figure out which AI agent vendor is open for me to sign up to use.

  • @dezly-macauley
    @dezly-macauley months ago

    I want to learn how to make an AI agent that auto-removes / auto-deletes these annoying spam s3x bot comments on useful YouTube videos like this.

  • @kazmi401
    @kazmi401 months ago

    Why does YouTube not add my comment? F*CK

  • @NathanChambers
    @NathanChambers months ago

    Using a module that requires you to upload your files or data (LlamaParse/LlamaCloud) totally defeats the purpose of self-hosting your own LLM models... Dislike just for that!
    It makes as little sense as putting your decentralized currency in a centralized bank. LOL

    • @skyamar
      @skyamar months ago

      stupid orc

    • @iva1389
      @iva1389 months ago +2

      How is that an issue? You want to have the ability to parse the files to the model. Are you sure you've grasped the concept of agents and tools? The whole point is have RAG locally.
      Decentralized comparison is simply unrelated to what has been done here.

    • @NathanChambers
      @NathanChambers months ago

      @@iva1389 It is the same thing being done. You're taking something that allows you/your business to do things on their own without third party... but adding 3rd party for no reason. 3rd party where your data can be hacked/stolen/man-in-the-middle attacked. So the comparison IS VALID!

    • @NathanChambers
      @NathanChambers months ago +1

      @@iva1389 The whole point of things like ollama and LLMs is to keep things IN-HOUSE. Doing 3rd party defeats the purpose of using these models. Same things as putting decentralized money in central banks. So they really are the same type of stupid thing to do!
      It's like saying cocaine is bad for you, but let's go do some crack. :P

    • @TechWithTim
      @TechWithTim  months ago +3

      Then simply don't use it and use the local loading instead. I'm just showing a great option that works incredibly well; you can obviously tweak this, and that's the idea.

  • @AndyPandy-ni1io
    @AndyPandy-ni1io 10 hours ago

    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer("BAAI/bge-m3")
    from llama_index import download_loader
    download_loader("LocalDiskVectorStore")().persist(persist_dir="./storage")

  • @jaivalani4609
    @jaivalani4609 months ago

    Hi Tim, it's really simple to understand.
    One question: is LlamaParse free to use, or does it need a subscription key?

    • @jaivalani4609
      @jaivalani4609 months ago

      Can we use LlamaParse locally?

    • @TechWithTim
      @TechWithTim  months ago +1

      It’s free to use!

    • @jaivalani4609
      @jaivalani4609 months ago

      @@TechWithTim Thanks, but does it require data to be sent to the cloud?

    • @samohtGTO
      @samohtGTO months ago

      @@jaivalani4609 It does send it to the cloud, and you can do 1,000 pages per day on the free tier. It sends the file to the cloud and gets the markdown file back.

  • @maximelhuillier8964
    @maximelhuillier8964 26 days ago

    I have this error message: [WinError 126] The specified module could not be found. Error loading \AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\torch\lib\shm.dll or one of its dependencies. Can you help me?

    • @donaldhawkins6610
      @donaldhawkins6610 11 days ago

      This is a bug and should be fixed with pytorch >= 2.3.1. If pytorch is version 2.3.0 in requirements.txt, change it to 2.3.1 or a newer release if another one is already out

  • @norminemralino2260
    @norminemralino2260 months ago

    I get an error when trying to parse readme.pdf:
    Error while parsing the file '/Users/.../AI-Agent-Code-Generator/data/readme.pdf': Illegal header value b'Bearer '
    Failed to load file /Users/.../AI-Agent-Code-Generator/data/readme.pdf with error: Illegal header value b'Bearer '. Skipping...
    Any clue to what might be happening?

    • @norminemralino2260
      @norminemralino2260 months ago

      I'm pretty sure it has something to do with LlamaParse(). I can't seem to reach LlamaCloud using my API key. I copied and pasted it into the .env file.

    • @norminemralino2260
      @norminemralino2260 months ago

      Not sure why load_dotenv() doesn't work for me. I was able to set the key using os.environ['LLAMA_CLOUD_API_KEY'].
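      What `load_dotenv()` does here can be approximated with a few lines of standard-library Python; this is a hedged sketch of the basic behavior (read KEY=VALUE lines into os.environ), not python-dotenv's full implementation, and `DEMO_LLAMA_KEY` below is a hypothetical key name used only for illustration.

      ```python
      import os

      def load_env_file(path=".env"):
          """Minimal stand-in for python-dotenv's load_dotenv():
          read KEY=VALUE lines and export them into os.environ."""
          if not os.path.exists(path):
              return False
          with open(path) as f:
              for line in f:
                  line = line.strip()
                  # skip blanks, comments, and lines without an '='
                  if not line or line.startswith("#") or "=" not in line:
                      continue
                  key, _, value = line.partition("=")
                  # keep any value already set in the real environment
                  os.environ.setdefault(key.strip(), value.strip().strip('"'))
          return True
      ```

      If a sketch like this loads the key but the library call does not, a common cause is the working directory: `load_dotenv()` with no arguments looks for `.env` relative to where the script is run, so running from another folder silently loads nothing.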