Python Advanced AI Agent Tutorial - LlamaIndex, Ollama and Multi-LLM!

  • Published Nov 20, 2024

Comments • 177

  • @joe_hoeller_chicago
    @joe_hoeller_chicago 3 days ago

    I like Tim. Tim explains the important things concisely without diving into rabbit holes. Tim gets straight to the point without loud obnoxious music using code. Tim is an expert. Be like Tim.

  • @257.4MHz
    @257.4MHz 7 months ago +34

    You are one of the best explainers ever. Out of 50 years listening to thousands of people trying to explain thousands of things. Also, it's raining and thundering outside and I'm creating this monster, I feel like Dr. Frankenstein

    • @justcars2454
      @justcars2454 7 months ago +2

      50 years of listening and learning; I'm sure you have great knowledge.

    • @Davichius
      @Davichius 3 months ago

      Best comment ever 👌 😅

    • @aga1nstall0dds
      @aga1nstall0dds 1 month ago

      You've been studying AI for 50 years?!

    • @rembautimes8808
      @rembautimes8808 21 days ago

      Agreed; I've been watching a lot of Tim's videos 😂

  • @AlexKraken
    @AlexKraken 6 months ago +12

    If you keep getting timeout errors and happen to be using a somewhat lackluster computer like me, changing `request_timeout` in these lines
    llm = Ollama(model="mistral", request_timeout=3600.0)
    ...
    code_llm = Ollama(model="codellama", request_timeout=3600.0)
    to a larger number (3600.0 is 1 hour, but it usually takes only 10 minutes) helped me out. Thanks for the tutorial!

    • @ricardopata8846
      @ricardopata8846 6 months ago +3

      thanks mate!

    • @jeenathkumar3291
      @jeenathkumar3291 4 months ago +1

      thanks @alexkraken

    • @feirahisham4033
      @feirahisham4033 2 months ago +1

      Thank you for your comment! This really helped me! I'd been stuck for a few hours! Thanks!!!

    • @CodingWithStella
      @CodingWithStella 1 month ago

      thank you so much.

  • @bajerra9517
    @bajerra9517 7 months ago +13

    I wanted to express my gratitude for the Python Advanced AI Agent Tutorial - LlamaIndex, Ollama and Multi-LLM! This tutorial has been incredibly helpful in my journey to learn and apply advanced AI techniques in my projects. The clear explanations and step-by-step examples have made it easy for me to understand and implement these powerful tools. Thank you for sharing your knowledge and expertise!

    • @DonG-1949
      @DonG-1949 2 months ago

      This is clearly a bot-written comment, but why? What's their endgame? So many bots with puzzling intentions

  • @briancoalson
    @briancoalson 6 months ago +12

    Some helpful things when going through this:
    - Your Python version needs to be 3.11 or lower; some of the dependencies don't support 3.12 yet.

    • @mikewebb3855
      @mikewebb3855 6 months ago

      For me, once I installed Xcode, I reran the package install and got the llama-cpp-python wheel to build. Thanks for this note; it helped make sense of the error message.

    • @dearadulthoodhopeicantrust6155
      @dearadulthoodhopeicantrust6155 6 months ago

      Yup, I encountered this on Windows. In VS Code, Ctrl+Shift+P opens the command palette. I searched for "interpreter" and was able to access previous Python versions in different environments; I selected a Conda environment and opened a new terminal. I checked python --version and the selected Python version was active.

  • @xero5159
    @xero5159 1 month ago +1

    Thanks to you, I can now create an agent with Ollama and LlamaIndex. I had been working on this topic for a month. What a headache. Now it's solved. Thank you very much.

  • @ft4jemc
    @ft4jemc 7 months ago +9

    Great video. Would really like to see methods that don't involve reaching out to the cloud, keeping everything local.

  • @samliske1482
    @samliske1482 6 months ago +3

    You are by far my favorite tech educator on this platform. Feels like you fill in every gap left by my curriculum and inspire me to go further with my own projects. Thanks for everything!

  • @seanbergman8927
    @seanbergman8927 7 months ago +2

    Excellent demo! I liked seeing it built in vs code with loops, unlike many demos that are in Jupyter notebooks and can’t run this way.
    Regarding more demos like this… Yes!! Most definitely could learn a lot from more, and more advanced, LlamaIndex agent demos. Would be great to see a demo that uses their chat agent and maintains chat state for follow-up questions. Even more advanced and awesome would be an example where the agent asks a follow-up question if it needs more information to complete a task.

  • @valesanchez6336
    @valesanchez6336 3 months ago

    I have never found anyone who explains code and concepts as well as you. Thank you for everything you do; it really means a lot ♥♥

  • @davidtindell950
    @davidtindell950 6 months ago +1

    Thank you for this very informative video. I really like the capabilities of LlamaIndex with PDFs.
    I used it to process several of my own medium-size PDFs and it was very quick and correct.
    It would be great to have another video on how to save and reuse the VectorStore for queries
    against PDFs already processed. To me this is even more important than the code generation.

  • @ChadHuffman
    @ChadHuffman 7 months ago +1

    Amazing as always, Tim. Thanks for spending the time to walk through this great set of tools. I'm looking forward to trying this out with data tables and PDF articles on parsing these particular data sets to see what comes out the other side. If you want to take this in a different direction, I'd love to see how you would take PDFs on how different parts of a system work and their troubleshooting methodology and then throw functional data at the LLM with errors you might see. I suspect (like other paid LLMs) it could draw some solid conclusions. Cheers!

  • @beautybarconn
    @beautybarconn 6 months ago +3

    No idea what’s going on but I love falling asleep to these videos 😊

  • @techgiantt
    @techgiantt 7 months ago +4

    Just used your code with llama 3, and made the code generator a function tool, and it was fvcking awesome. Thanks for sharing👍🏻

  • @martin-xq7te
    @martin-xq7te 5 months ago

    Great work Tim, you hit it on the head. What puts people off is the downloading; putting it all into a requirements file is a great idea.

  • @garybpt
    @garybpt 7 months ago

    This was fascinating, I'm definitely going to be giving it a whirl! I'd love to learn how something like this could be adapted to write articles using information from our own files.

  • @vaughanjackson2262
    @vaughanjackson2262 6 months ago +4

    Great vid. The only issue is that the parsing is done externally; for RAGs ingesting sensitive data this would be a major issue.

    • @debeerpaul
      @debeerpaul 3 months ago

      Yeah, that's probably why it's a free service. They take your clients' sensitive info and train their own AI. Not good.

  • @Batselot
    @Batselot 7 months ago +9

    I was really looking forward to learning this. Thanks for the video.

  • @trk1139
    @trk1139 3 months ago

    Your explanation is quite effective. Could you let me know when the next video on a similar topic is scheduled for release?

  • @zmazadi
    @zmazadi 2 months ago

    You are truly amazing at explaining concepts; it's like you have fully understood them yourself, and that's why you can explain them so well. I'm trying to get VS Code autocomplete to work on Mac but nothing works. Which extension are you using?

  • @siddharthp9216
    @siddharthp9216 5 months ago

    The way you explain is really good; I understood it. You code line by line, while others just copy-paste and don't explain what the code is doing, but you explained everything. Really good content.
    Also, can you make more tutorials using multi-agent CrewAI with this local multi-LLM setup? The OpenAI key is very expensive, and all the other channels use that; none do it with a local LLM.

  • @equious8413
    @equious8413 6 months ago

    "If I fix these up." My god, Tim. You know that won't scale.

  • @seanh1591
    @seanh1591 7 months ago +2

    Tim - thanks for the wonderful video. Very well done sir!! Is there an alternative to LlamaParse to keep the parsing local?

  • @_HodBuri_
    @_HodBuri_ 6 months ago +14

    Error 404 not found - localhost - /api/chat [FIX]
    If anyone else gets an error like that when trying to run the codellama agent, just run the codellama LLM in the terminal to download it; it didn't download automatically for me, as he mentions around 29:11.
    So, similar to what he showed at the start with Mistral (`ollama run mistral`), you can run this in a new terminal to download codellama:
    `ollama run codellama`

    • @aishwarypatil8708
      @aishwarypatil8708 6 months ago +2

      Thanks a lot!!!!

    • @firasarfaoui2739
      @firasarfaoui2739 6 months ago +1

      I love this community... thanks a lot!

    • @jishh7
      @jishh7 5 months ago +1

      @TechWithTim This should be pinned :D

    • @umutsonmez5214
      @umutsonmez5214 4 months ago

      You are my hero, bro. This problem was so f*cking disgusting. Thank you!

    • @sahinomeerr
      @sahinomeerr 1 month ago

      Some heroes don't wear capes.

  • @shopvictor
    @shopvictor 1 month ago

    I'm 16 and this is the best video tutorial on LLM agents!

    • @Ayush-_-007
      @Ayush-_-007 19 days ago +1

      you still want people to say...oouuu he is only 16 wow he has potentia.....shit

  • @nour.mokrani
    @nour.mokrani 7 months ago +2

    Thanks for this tutorial and your way of explaining; I've been looking for this.
    Can you also make a vid on how to build enterprise-grade generative AI with Nvidia NeMo? That would be so interesting. Thanks again!

  • @jorgitozor
    @jorgitozor 7 months ago

    This is very clear and very instructive, so much valuable information! Thanks for your work

  • @robertwclayton6962
    @robertwclayton6962 6 months ago

    Great video tutorial! Thanks 🙌
    (liked and subscribed, lol)
    A bit of a "noob" developer here, so vids like this really help.
    I know it's a lot to ask, but....
    I was wondering if you might consider showing us how to build a more modular app, where we have separate `.py` files to ingest and embed our docs, then another to create and/or add embeddings to a vector DB (like Chroma), then another for querying the DB. Would this be possible?
    It would be nice to know how to have one Python file feed data to another, while also minimizing redundancy (e.g., IF `chroma_db` already exists, the `query.py` file will know to load the DB and query with LlamaIndex accordingly)
    Even better if you can show us how make our `query_engine` remember users' prior prompts (during a single session).
    Super BONUS POINTS if you can show us how to then feed the `query.py` data into a front-end interface for an interactive chat with a nice UI.
    Phew! That was a lot 😂
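The "load if it already exists, otherwise build and persist" branching asked about above can be sketched in plain stdlib Python. Note this is a minimal sketch: `build_index` and `load_index` here are hypothetical stand-ins, not LlamaIndex or Chroma API.

```python
from pathlib import Path

def build_index(persist_dir: Path) -> str:
    # Hypothetical stand-in: embed documents and persist them to disk.
    persist_dir.mkdir(parents=True, exist_ok=True)
    (persist_dir / "index.marker").write_text("embeddings")
    return "built"

def load_index(persist_dir: Path) -> str:
    # Hypothetical stand-in: load an already-persisted index.
    assert (persist_dir / "index.marker").exists()
    return "loaded"

def get_index(persist_dir: Path) -> str:
    # The "IF chroma_db already exists" check from the comment:
    # only rebuild (and re-embed) when nothing is persisted yet.
    if (persist_dir / "index.marker").exists():
        return load_index(persist_dir)
    return build_index(persist_dir)
```

With real LlamaIndex code, `build_index` would roughly correspond to `VectorStoreIndex.from_documents(...)` followed by `index.storage_context.persist(persist_dir=...)`, and `load_index` to `load_index_from_storage(StorageContext.from_defaults(persist_dir=...))`; the branching logic stays the same.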

  • @blissfulDew
    @blissfulDew 6 months ago +1

    Thanks for this!! Unfortunately I can't run it on my laptop; it takes forever and the AI seems confused. I guess it needs a powerful machine...

  • @Pushedrabbit699-lk6cr
    @Pushedrabbit699-lk6cr 7 months ago +3

    Could you also do a video on infinite world generation using chunks for RPG type pygame games?

  • @Ayush-_-007
    @Ayush-_-007 19 days ago +1

    11:20
    If your ollama command doesn't work like mine, try reinstalling and then restarting. If that doesn't help, try manually adding it to your PATH.

  • @ravi1341975
    @ravi1341975 7 months ago +2

    Wow, this is absolutely mind-blowing. Thanks, Tim.

  • @WismutHansen
    @WismutHansen 7 months ago +3

    You obviously went to the Matthew Berman School of I'll revoke this API Key before publishing this video!

  • @ricardokullock2535
    @ricardokullock2535 6 months ago

    The guys at llmware have some fine-tuned models for RAG and some for function calling (outputting structured data). Could be interesting to try out with this.

  • @nikta456
    @nikta456 6 months ago

    Please create a video about production-ready AI agents!

  • @siddharthp9216
    @siddharthp9216 5 months ago

    I really loved the video. Please keep making videos like this!

  • @imramugh
    @imramugh 2 months ago

    Thank you for this video... it was really informative.

  • @Ayush-_-007
    @Ayush-_-007 19 days ago

    Guys, if there is some kind of problem installing the modules with pinned versions, you can remove the specified versions and then try installing again...
    This is for slackers like me:
    write a small Python program to delete the version text after the == (ask ChatGPT if you can't)
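The little pin-stripping script described above could look like this; a sketch only, with the `requirements.txt` handling left to the caller:

```python
def strip_pins(requirements_text: str) -> str:
    # Drop everything from '==' onward on each line,
    # keeping only the bare package names.
    names = []
    for line in requirements_text.splitlines():
        names.append(line.split("==")[0].rstrip())
    return "\n".join(names)
```

Read `requirements.txt`, pass its text through `strip_pins`, write the result back, and `pip install -r requirements.txt` will then resolve versions itself. Note this only loosens `==` pins in the file; `Requires-Python` markers in package metadata (like the `>=3.8.1` error mentioned elsewhere in these comments) still apply.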

  • @billturner2112
    @billturner2112 6 months ago

    I liked this. Out of curiosity, why venv rather than Conda?

  • @mohanvenkataraman648
    @mohanvenkataraman648 4 months ago

    Great video tutorial/walk-through. It would be nice to determine the minimum configuration required to run it. I tried the example on a 4-core Xeon Ubuntu laptop, 16 GB, with an NVIDIA Quadro M2000M / Mesa Intel HD. Sometimes it gave a bunch of errors and I had to do a cold restart. Also, the only difference between an Ollama and a non-Ollama version should be the instantiation of the LLM and the embedding model. Am I right?

  • @sethngetich4144
    @sethngetich4144 7 months ago +5

    I keep getting errors when trying to install the dependencies from requirements.txt

    • @I2ealTuber
      @I2ealTuber 6 months ago +1

      Make sure you have the correct version of python

    • @AndrewH.Agbezin
      @AndrewH.Agbezin 5 months ago

      Or better, I'd prefer to just pip install them manually.

  • @mredmister3014
    @mredmister3014 5 months ago

    Good video, but do you have a complete AI agent tutorial using your own data, without the code-generation part? This is the closest tutorial I've found for an on-premises AI agent implementation that I can understand. Thanks!

  • @willlywillly
    @willlywillly 7 months ago

    Another great tutorial... thank you! How do I get in touch with you, Tim, for consulting?

    • @TechWithTim
      @TechWithTim 7 months ago +2

      Send an email to the address listed on my YouTube about page.

  • @AaronGayah-dr8lu
    @AaronGayah-dr8lu 5 months ago

    This was brilliant, thank you.

  • @tomasemilio
    @tomasemilio 6 months ago

    Bro your videos are gold.

  • @camaycama7479
    @camaycama7479 6 months ago

    Awesome video, man thx a big bunch!

  • @purvislewies3118
    @purvislewies3118 6 months ago

    yes man...this what i want to do and more...

  • @bigbena23
    @bigbena23 6 months ago +1

    What if I don't want my data to be processed in the cloud? Is there an alternative to LlamaParse that can be run locally?

  • @henrylam4934
    @henrylam4934 7 months ago

    Thanks for the tutorial. Is there an alternative to LlamaParse that allows me to run the application completely locally?

  • @camaycama7479
    @camaycama7479 6 months ago

    Will Mistral Large be available? I'm wondering if the LLM list will stay up to date or whether there are other steps to do.

  • @mayerxc
    @mayerxc 7 months ago +6

    What are your MacBook Pro specs? I'm looking for a new computer to run llm locally.

    • @techgiantt
      @techgiantt 7 months ago +6

      Buy a workstation with a very good Nvidia GPU so you can use CUDA. If you still want to go for a MacBook Pro, get the M2 with 32 GB or 64 GB of RAM. I'm using a 16" M1 MacBook with 16 GB of RAM and I can only run 7B-13B LLMs without crashing it.

    • @TechWithTim
      @TechWithTim 7 months ago +2

      I have an M2 Max

    • @GiustinoEsposito98
      @GiustinoEsposito98 7 months ago

      Have you ever thought about using Colab as a remote web server with a local LLM such as Llama 3, and calling it from your PC to get predictions? I have the same problem and was thinking about solving it like this.

    • @iamderrickfoo
      @iamderrickfoo 5 months ago

      My M1 MacBook Pro with 8 GB hangs while running the LLM locally. Any alternatives we can use to learn to build without killing my MBP?

  • @danyloustymenko7465
    @danyloustymenko7465 7 months ago +1

    What's the latency of models running locally?

  • @avxqt001
    @avxqt001 7 months ago +1

    I can't install the llama-index packages on my Windows system. Also, the 'guidance' package is showing an error.

  • @themax2go
    @themax2go 4 months ago

    Neat! But why not a multi-agent dev team that evaluates (QA) and iterates on code that fails QA?

  • @jay.ogayon
    @jay.ogayon 7 months ago

    What keyboard are you using? 😊

  • @ben3ng933
    @ben3ng933 4 months ago

    This is awesome.

  • @song1749
    @song1749 3 months ago

    Awesome 👍

  • @LourdesMarín-b5h
    @LourdesMarín-b5h 6 months ago +1

    Why did I need to downgrade Python 3.12 to 3.11 to install requirements.txt (some dependencies require a version below 3.12), while I see you using Python 3 with no errors?

    • @tomgreen8246
      @tomgreen8246 1 month ago

      Guessing you are using Windows? Sometimes you need a different library / adapted library for Windows. It's easier to follow someone who's a Windows developer, but once you get used to the nuances, it's pretty simple.
      Or you can just use a WSL2 Ubuntu project, or Docker, and it all works fine.

  • @ChathurangaBW
    @ChathurangaBW 6 months ago

    just awesome !

  • @kodiak809
    @kodiak809 7 months ago

    So Ollama runs locally on your machine? Can I make it cloud-based by deploying it in my backend?

  • @kumaronchat
    @kumaronchat 18 days ago

    I am using Python 3.11.6, and on my Windows OS I installed the C++ developer tools option, but I'm getting this error: "Building wheels for collected packages: guidance, llama-cpp-python
    Building wheel for guidance (pyproject.toml) ... error
    error: subprocess-exited-with-error"
    Shall I proceed with this?

  • @samwamae6498
    @samwamae6498 7 months ago +1

    Awesome 💯

  • @Czarlsen
    @Czarlsen 6 months ago

    Is there much difference between result_type="markdown" and result_type="text"?

  • @Pythonist_01
    @Pythonist_01 7 months ago

    I did one using Llama2.

    • @giovannip.6473
      @giovannip.6473 7 months ago

      Are you sharing it somewhere?

  • @RolandDewonou
    @RolandDewonou 6 months ago

    It seems multiple entries in the requirements.txt require different versions of Python and other libraries. Could you clarify which versions of what are needed for this to work?

  • @anandvishwakarma933
    @anandvishwakarma933 7 months ago

    Hey, can you share the system configuration needed to run this application?

  • @nikhilv6732
    @nikhilv6732 4 months ago +1

    Can anyone tell me what prerequisites to learn for this?

  • @vedantbande5682
    @vedantbande5682 6 months ago

    How do we know which requirements.txt dependencies we actually need (it's a large list)?

  • @hamsehassan7304
    @hamsehassan7304 6 months ago

    Every time I try to install the requirements.txt files, it only downloads some of the content and then I get this error message: Requires-Python >=3.8.1. I'm running this on a Mac with Python 3.12.3 and I can't seem to install the older version of Python.

  • @adilzahir9921
    @adilzahir9921 7 months ago

    Can I use this to make an AI agent that can call customers, interact with them, and take notes on what happens? Thanks!

  • @kayoutube690
    @kayoutube690 6 months ago

    New subscriber here!!!

  • @257.4MHz
    @257.4MHz 7 months ago +2

    Well, I can't get it to work. It gives a 404 on /api/chat.

    • @omkarkakade3438
      @omkarkakade3438 6 months ago +1

      I am getting the same error

    • @mrarm4x
      @mrarm4x 6 months ago +2

      You're probably getting this error because you're missing the codellama model; run `ollama pull codellama` and it should fix it.

  • @Marven2
    @Marven2 7 months ago

    Can you make a series?

  • @mustafa.atamer
    @mustafa.atamer 1 month ago

    LlamaParse does its parsing non-locally, which means it can't be used for enterprise. Is there any way to do this fully locally?

  • @amruts4640
    @amruts4640 7 months ago

    Can you please do a video about making a GUI in Python?

  • @DomenicoDiFina
    @DomenicoDiFina 7 months ago

    Is it possible to create an agent using other languages?

  • @AndyPandy-ni1io
    @AndyPandy-ni1io 5 months ago

    What am I doing wrong? When I run it, it doesn't work no matter what I try.

  • @radheyakhade9853
    @radheyakhade9853 7 months ago

    Can anyone tell me what basic things one should know before going into this video?

  • @rajansikarwar3500
    @rajansikarwar3500 1 month ago

    When I'm using Llama 3.1, my LLM response gets stuck in a loop of action/observation, action/observation. What should I do?

    • @Idiot123009
      @Idiot123009 1 month ago +1

      This is a limitation of Llama 3.1 and all the Llama models.
      Once we pass in tools, it's always ready to call them, even when the query doesn't need them.

  • @joshuaarinaitwe8351
    @joshuaarinaitwe8351 7 months ago

    Hey Tim, great video. I have been watching your videos for some time, though I was definitely young then. I need some guidance: I'm 17 and I want to do an AI and machine learning course. Can somebody advise me?

  • @ghazalrafique4012
    @ghazalrafique4012 1 month ago

    I can't move past this error: "No module named 'llama_index.llms.ollama'". I've tried uninstalling and reinstalling llama_index, and I've also downgraded my Python version. Did anyone else run into this?

  • @JRis44
    @JRis44 5 months ago

    Dang, seems I'm stuck with a 404 message @ 31:57.
    Anyone else have that issue? Or have a fix for it? Maybe the dependencies already need an update?

  • @ofeksh
    @ofeksh 7 months ago

    Hi Tim!
    GREAT JOB on pretty much everything!
    BUT I have a problem:
    I'm running on Windows with PyCharm, and it shows an error when installing the requirements.
    Because it's PyCharm, I have two options for installing them: one from within PyCharm and one from the terminal.
    Both options give an error (similar, but not exactly the same).
    Can you please help me with it?

    • @diegoromo4819
      @diegoromo4819 7 months ago +1

      You can check which Python version you have installed.

    • @ofeksh
      @ofeksh 7 months ago

      @diegoromo4819 Hey, thank you for your response. Which version should I have? I can't find it in the video.

    • @neilpayne8244
      @neilpayne8244 7 months ago +1

      @@ofeksh 3.11

    • @ofeksh
      @ofeksh 7 months ago

      @@neilpayne8244 shit, that's my version...

  • @nikta456
    @nikta456 6 months ago

    Problems?
    # make sure the libraries are installed
    `pip install llama-index qdrant_client torch transformers`
    `pip install llama-index-llms-ollama`
    # codellama didn't download
    `ollama pull codellama`
    # timeout error
    set request_timeout to 500

  • @JNET_Reloaded
    @JNET_Reloaded 6 months ago

    nice

  • @Darkvader9427
    @Darkvader9427 6 months ago

    Can I do the same using LangChain?

  • @SashoSuper
    @SashoSuper 7 months ago

    Nice one

  • @unflappableunflappable1248
    @unflappableunflappable1248 7 months ago

    Cool

  • @PANDURANG99
    @PANDURANG99 5 months ago

    How do I process multiple PDFs at a time, where the PDFs contain drawings?

  • @dezly-macauley
    @dezly-macauley 7 months ago

    I want to learn how to make an AI agent that auto-removes/auto-deletes these annoying spam s3x-bot comments on useful YouTube videos like this.

  • @adilzahir9921
    @adilzahir9921 7 months ago

    What's the minimum laptop needed to run this model? Thanks!

    • @samohtGTO
      @samohtGTO 6 months ago +1

      You need a good GPU to run literally any LLM.

  • @meeFaizul
    @meeFaizul 7 months ago

    ❤❤❤❤❤❤

  • @Ari-pq4db
    @Ari-pq4db 7 months ago

    Nice ❤

  • @technobabble77
    @technobabble77 7 months ago

    I'm getting the following when I run the prompt:
    Error occured, retry #1: timed out
    Error occured, retry #2: timed out
    Error occured, retry #3: timed out
    Unable to process request, try again...
    What is this timing out on?

    • @coconut_bliss5539
      @coconut_bliss5539 7 months ago

      Your Agent is unable to reach your Ollama server. It's repeatedly trying to query your Ollama server's API on localhost, then those requests are timing out. Check if your Ollama LLM is initializing correctly. Also make sure your Agent constructor contains the correct LLM argument.

    • @TballaJones
      @TballaJones 6 months ago

      Do you have a VPN like NordVPN running? Sometimes that can mess up local servers.

    • @adithyav6877
      @adithyav6877 5 months ago

      Change the request_timeout to a bigger value, like 3600.0.
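The retry loop behind those "Error occured, retry #N" messages, combined with the longer-timeout advice, can be sketched generically in stdlib Python. This is a hypothetical helper, not the tutorial's actual code; `run_query` stands in for whatever calls the agent.

```python
def query_with_retries(run_query, retries=3):
    # run_query is any callable that raises TimeoutError when the
    # local Ollama server does not answer within request_timeout.
    for attempt in range(1, retries + 1):
        try:
            return run_query()
        except TimeoutError as exc:
            # "occured" matches the spelling in the quoted log above
            print(f"Error occured, retry #{attempt}: {exc}")
    return "Unable to process request, try again..."
```

If every attempt times out like in the comment, the fix is usually on the server side (model downloaded, enough RAM) or a larger request_timeout, not more retries.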

  • @ases4320
    @ases4320 7 months ago

    But this is not completely "local" since you need an API key, no?

    • @matteominellono
      @matteominellono 7 months ago

      These APIs are used within the same environment or system, enabling different software components or applications to communicate with each other locally without the need to go through a network.
      This is common in software libraries, operating systems, or applications where different modules or plugins need to interact.
      Local APIs are accessed directly by the program without the latency or the overhead associated with network communications.

  • @notaras1985
    @notaras1985 6 months ago

    How do we know that Meta hasn't corrupted the ollama model with spyware or other malicious code?

  • @Meir-ld2yi
    @Meir-ld2yi 6 months ago

    Ollama Mistral works so slowly that even "hello" takes like 20 minutes.

  • @Aiden-rz6vf
    @Aiden-rz6vf 7 months ago

    Llama 3

  • @dolapoadefisayomioluwole1341
    @dolapoadefisayomioluwole1341 7 months ago

    First to comment today 😂

  • @kazmi401
    @kazmi401 7 months ago

    Why does YouTube not add my comment? F*CK