Give Internet Access to your Local LLM (Python, Ollama, Local LLM)

  • Published Oct 7, 2024
  • Give your local LLM internet access using Python. The LLM will be able to use the internet to find information relevant to the user's questions.
    Join the Discord: / discord
    Library Used:
    github.com/emirsahin1/llm-axe
  • Film & Animation

Comments • 23

  • @Oxxygen_io
    @Oxxygen_io 5 months ago +3

    This was great, thanks for the introduction.
    Would love to see a deep dive that adds this on as an extra, where you get a command prompt with internet search: like running "ollama, but now with internet".

    • @polymir9053
      @polymir9053  5 months ago +1

      Thanks! I appreciate the suggestion.
      There is a small demo showing how this can be used to make a command-prompt chat where you can chat with the online agent. Here is the link: github.com/emirsahin1/llm-axe/blob/main/examples/ex_online_chat_demo.py
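
      For reference, a minimal command-prompt chat with internet access could look like the sketch below. The OllamaChat and OnlineAgent names follow the library's examples; the exact import path is an assumption, so check the repo if it has changed.

      # Minimal sketch of a command-prompt chat with internet search.
      # Assumes llm-axe's OllamaChat (local Ollama wrapper) and OnlineAgent,
      # per the linked example; verify the imports against the repo.
      from llm_axe import OllamaChat, OnlineAgent

      llm = OllamaChat(model="llama3")   # any model you have pulled in Ollama
      searcher = OnlineAgent(llm)

      while True:
          question = input("You: ")
          if question.lower() in ("quit", "exit"):
              break
          # search() looks up relevant pages online and answers from them
          print("Agent:", searcher.search(question))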

  • @_areck_
    @_areck_ 3 months ago +1

    Amazing video, extremely underrated channel. Good work. I needed this to complete my program for an assistant model using Ollama that can create files, run files, edit the contents of files, search the web, and maintain a persistent memory. This was the second-to-last thing I needed; now I just have to finish the run-files part.

  • @ViralComparison
    @ViralComparison 4 months ago +1

    Perfect! Need more videos like these.

  • @voltexripper8367
    @voltexripper8367 2 months ago

    Very good video, bro.

  • @irkedoff
    @irkedoff 5 months ago +1

    💜

  • @MrOnePieceRuffy
    @MrOnePieceRuffy 1 month ago

    Hello my friend, thank you for the helpful video! I learned what I was actually looking for. Can you do me a favor: open up OBS, click the three-dot button under your Mic/Aux, and choose Filters. There, press the + (plus) button in the left corner and add a "Noise Suppression" filter with its default settings. Thank you very much.

  • @HeRaUSA
    @HeRaUSA 11 days ago

    Sir, can this also explain and provide information from a big article?

  • @shortvideosfullofstupidity9534
    @shortvideosfullofstupidity9534 1 month ago

    What about reading from a website, or giving you a link based on a question like "find me 3-inch PVC pipes"?

  • @paleostressmanagement
    @paleostressmanagement 4 months ago +1

    Can this be used with the Ollama API? If so, how?

    • @polymir9053
      @polymir9053  4 months ago +1

      Yes, I'm using Ollama in the video. It has built-in support for the Ollama API through the OllamaChat class. See this example: github.com/emirsahin1/llm-axe/blob/main/examples/ex_online_agent.py
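
      For clarity, the linked example boils down to something like the sketch below (class names per the library's examples; treat the exact imports as an assumption and check the repo):

      # Rough sketch of ex_online_agent.py: OllamaChat talks to the local
      # Ollama server, and OnlineAgent adds the internet-search step.
      from llm_axe import OllamaChat, OnlineAgent

      llm = OllamaChat(model="llama3")
      agent = OnlineAgent(llm)
      print(agent.search("Why is the sky blue?"))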

    • @paleostressmanagement
      @paleostressmanagement 4 months ago

      @@polymir9053 Thanks! But I am still a bit confused as to how to use this with the Ollama API example for a chat completion:

      curl localhost:11434/api/chat -d '{
        "model": "llama3",
        "messages": [
          {
            "role": "user",
            "content": "why is the sky blue?"
          }
        ],
        "stream": false
      }'
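
      For comparison, the same chat completion that curl command makes can be written in plain Python with the requests library; llm-axe's OllamaChat class wraps this kind of call for you, so you normally don't need to hit the endpoint by hand.

      # The curl request above, as a Python sketch against the local Ollama server.
      import requests

      resp = requests.post(
          "http://localhost:11434/api/chat",
          json={
              "model": "llama3",
              "messages": [{"role": "user", "content": "why is the sky blue?"}],
              "stream": False,
          },
      )
      # With "stream": false, Ollama returns one JSON object whose
      # "message" field holds the assistant's reply.
      print(resp.json()["message"]["content"])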

  • @edengate1
    @edengate1 4 months ago

    Is it better to open multiple tabs from the same LLM when I want to ask about different subjects, like we do in ChatGPT? Or can I use a single chat for everything?

    • @polymir9053
      @polymir9053  4 months ago +1

      You can use a single Agent for multiple subjects. While agents do keep track of history, chat history is only used if passed in along with the question.
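
      As a rough illustration, history can be kept as the same role/content messages the Ollama API uses and passed back in with each question. The history keyword below is an assumption about the agent's ask() signature; check the library's examples before relying on it.

      # Hypothetical sketch: one agent, multiple subjects, explicit history.
      history = []

      def chat(agent, question):
          history.append({"role": "user", "content": question})
          answer = agent.ask(question, history=history)  # assumed signature
          history.append({"role": "assistant", "content": answer})
          return answer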

  • @jumpersfilmedinvr
    @jumpersfilmedinvr 2 months ago

    How much more advanced can we make a local LLM compared to the censored versions available publicly?

    • @polymir9053
      @polymir9053  2 months ago +1

      I think a lot of the functionality of platforms like ChatGPT is quite easily replicated with function calling and agents. The hard part is usually being able to locally run a large enough LLM that can reliably follow the system prompts. You can only do so much with wrapper code; eventually it all boils down to how good the LLM you're using is. If you're GPU-poor like me, I'd recommend looking into Groq cloud; they offer quite generous amounts of free API access to a lot of different LLM models.
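
      As an illustration of the Groq suggestion, their Python SDK (pip install groq) follows the familiar OpenAI-style chat API. The model name below is an example and may have changed; this is a sketch, not the video's method.

      # Illustrative sketch: calling a hosted Llama model on Groq cloud.
      import os
      from groq import Groq

      client = Groq(api_key=os.environ["GROQ_API_KEY"])
      completion = client.chat.completions.create(
          model="llama3-70b-8192",  # example model name; check Groq's current list
          messages=[{"role": "user", "content": "Why is the sky blue?"}],
      )
      print(completion.choices[0].message.content)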

    • @jumpersfilmedinvr
      @jumpersfilmedinvr 2 months ago

      @@polymir9053 Yeah dude, that might work out.
      There have got to be tons of ways to creatively use cloud compute.
      Imagine how many agents people could link together in unfathomable networks: open source and jailbroken.

  • @fakhrun4038
    @fakhrun4038 4 months ago +1

    Can you make a video for the web UI?

    • @polymir9053
      @polymir9053  4 months ago +1

      There is no web UI for this, but with some coding you could easily tie this up to any existing open-source chat UI; see the sketch below.
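
      As one example, a minimal Gradio wrapper (pip install gradio) could put a chat UI in front of the agent; the OllamaChat and OnlineAgent names are assumptions taken from the library's examples.

      # Minimal sketch: a web chat UI around the online agent via Gradio.
      import gradio as gr
      from llm_axe import OllamaChat, OnlineAgent

      agent = OnlineAgent(OllamaChat(model="llama3"))

      def respond(message, history):
          # Gradio's ChatInterface passes the user message and chat history;
          # here we simply forward the message to the online agent.
          return agent.search(message)

      gr.ChatInterface(respond).launch()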

  • @brenden_Li
    @brenden_Li 4 months ago

    What app are you executing the code with?

    • @polymir9053
      @polymir9053  4 months ago

      It's just Python and Ollama.

  • @themax2go
    @themax2go 2 months ago

    "Join the Discord" => "invalid invite" 😒

    • @polymir9053
      @polymir9053  2 months ago

      Sorry about that, try this: discord.com/invite/4DyMcRbK4G