Give Internet Access to your Local LLM (Python, Ollama, Local LLM)
- Published Oct 7, 2024
- Give your local LLM model internet access using Python. The LLM will be able to use the internet to find information relevant to the user's questions.
Join the Discord: / discord
Library Used:
github.com/emi...
This was great, thanks for the introduction.
Would love to see a deep dive on adding this as an extra, where you get a command prompt with internet search. Like running "ollama", but now with internet.
Thanks! I appreciate the suggestion.
There is a small little demo showing how this can be used to make a command prompt chat where you can chat with the online agent. Here is the link: github.com/emirsahin1/llm-axe/blob/main/examples/ex_online_chat_demo.py
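The linked demo boils down to a simple read-ask-print loop. Here is a minimal sketch of that idea; the `OllamaChat`/`OnlineAgent` class names and the `search()` method follow the repo's examples and may differ from the library's current API:

```python
# Minimal command-prompt chat sketch for an internet-enabled agent.
# Assumes llm-axe is installed and an Ollama server is running locally;
# class and method names follow the llm-axe repo examples.
def chat_loop():
    # Imported lazily so this file still parses without llm-axe installed.
    from llm_axe import OllamaChat, OnlineAgent
    llm = OllamaChat(model="llama3")
    agent = OnlineAgent(llm)  # the agent can search the internet for answers
    while True:
        question = input("You: ").strip()
        if question.lower() in ("quit", "exit"):
            break
        print("Agent:", agent.search(question))

# chat_loop()  # uncomment to start chatting
```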
Amazing video, extremely underrated channel. Good work. I needed this to complete my program for an assistant model using Ollama that can create files, run files, edit file contents, search the web, and maintain a persistent memory. This was the second-to-last thing I needed; now I just need to finish the run-files part.
Perfect! Need more videos like these.
very good video bro
💜
Hello my friend, thank you for the helpful video! I learned what I was actually looking for. Can you do me a favour: open up OBS, click the three-dot button under your Mic/Aux, and choose Filters. There, press the + (plus) button in the left corner, choose "Noise Suppression", and create one with its default settings. Thank you very much.
Sir, can this method also explain and provide information from a big article?
What about reading from a website, or giving you a link based on a question like "find me 3-inch PVC pipes"?
Can this be used with the Ollama API? If so, how?
Yes, I'm using Ollama in the video. It has built-in support for the Ollama API through the OllamaChat class. See this example: github.com/emirsahin1/llm-axe/blob/main/examples/ex_online_agent.py
@@polymir9053 Thanks! But I'm still a bit confused about how to use this with the Ollama API, for example with a chat completion:
curl localhost:11434/api/chat -d '{
"model": "llama3",
"messages": [
{
"role": "user",
"content": "why is the sky blue?"
}
],
"stream": false
}'
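The same request can be made from Python. Here is a sketch of a chat-completion call equivalent to the curl command above, using only the standard library; it assumes an Ollama server on localhost:11434 with the llama3 model pulled, and the helper names are illustrative:

```python
# Python equivalent of the curl call to Ollama's /api/chat endpoint,
# using only the standard library (no extra dependencies).
import json
import urllib.request

def build_chat_payload(model: str, content: str) -> dict:
    """Build the JSON body for a non-streaming chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "stream": False,
    }

def chat(model: str, content: str, host: str = "http://localhost:11434") -> str:
    """Send one chat request and return the assistant's reply text."""
    body = json.dumps(build_chat_payload(model, content)).encode()
    req = urllib.request.Request(
        f"{host}/api/chat", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Example (requires a running Ollama server):
# print(chat("llama3", "why is the sky blue?"))
```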
Is it better to open separate tabs from the same LLM when I want to ask about different subjects, like we do in ChatGPT? Or can I use a single chat for everything?
You can use a single Agent for multiple subjects. While agents do keep track of history, chat history is only used if passed in along with the question.
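That point about history can be made concrete: the history is just a list of role/content messages (the same shape the Ollama API uses), and the model only sees earlier turns if you pass the list along with the new question. A small illustrative sketch, with hypothetical helper names:

```python
# Sketch of keeping chat history as a plain list of Ollama-style
# role/content messages. The helper name here is illustrative.
def add_turn(history: list, role: str, content: str) -> list:
    """Append one message to the running history and return it."""
    history.append({"role": role, "content": content})
    return history

history = []
add_turn(history, "user", "What is Ollama?")
add_turn(history, "assistant", "Ollama runs LLMs locally.")
add_turn(history, "user", "Does it have an HTTP API?")

# Passing the whole list gives the model context from earlier turns;
# passing only the last message asks the question with no memory.
```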
How much more advanced can we make a local LLM than the censored versions available publicly?
I think a lot of the functionality of platforms like ChatGPT is quite easily replicated with function calling and agents. The hard part is usually being able to locally run a large enough LLM that can reliably follow the system prompts. You can only do so much with wrapper code; eventually it all boils down to how good the LLM you're using is. If you're GPU-poor like me, I'd recommend looking into Groq cloud; they offer quite generous free API access to a lot of different LLM models.
@@polymir9053 Yeah dude, that might work out.
There have got to be tons of ways to creatively use cloud space.
I can imagine how many agents people will link together in unfathomable networks, open source and jailbroken.
Can you make a video for the web ui?
There is no web UI for this, but with some coding you could easily tie it into any existing open-source chat UI.
What app are you executing the code with?
It's just Python and Ollama.
"Join the Discord" => "invalid invite" 😒
Sorry about that, try this: discord.com/invite/4DyMcRbK4G