Ollama and Python for Local AI LLM Systems (Ollama, Llama2, Python)
- Published May 31, 2024
- Support Classes at - donorbox.org/etcg
Find All Classes at - www.elithecomputerguy.com
LinkedIn at - / eli-etherton-a15362211
Notes and Code - github.com/elithecomputerguy/...
00:00 Introduction
03:54 Demonstration
06:15 Installing Ollama and Models
10:14 Pulling Models and Running Ollama at the Shell
15:41 Using Python with Ollama
24:58 Final Thoughts
- Category: Science & Technology
Great video, thanks! Allowed me to wrap my head around doing this locally
You helped me a lot 9 years ago with your networking videos. Glad to see you're still here! Also, what a shame your channel gets so few views nowadays.
I recently rediscovered your channel after losing track of it for a while. Back in the day, I remember you were all about general IT content, so it's great to see you active again! As a computer scientist and AI engineer, I almost turned my back on AI due to the limitations of early models. However, the advent of transformers, attention mechanisms, and other breakthroughs reignited my passion. I studied AI at MIT and, honestly, I used to think it might have been in vain; turns out, I was wrong! I've been deeply involved in AI research for the past two years, publishing articles and currently working on enhancing Retrieval-Augmented Generation for sectors like finance, healthcare, and law. It's exhilarating to pivot away from IT infrastructure and network management. I definitely don't miss developing point-of-sale systems; I'm much happier innovating in AI!
Wow, I didn't realize this finally came out. I was told about all the roars, but this was hiding in my feed.
Thanks for the great content.
Enjoyed the video. It's really cool that Ollama can also read images. I've really been enjoying LM Studio lately.
This video is better than Obama
Love it.
Thank you.
Ollama, better than Obama!
Hi Eli, I was wondering if you could do a video on implementing an LLM and then fine-tuning it for some business use-case example. That would be so interesting.
Love this video.
Is there a way to load in some training data with Ollama?
Great video. Eli's format is the best; he is the person I would want to have on the team. Could you kindly recommend a course/book/article/video to understand what's inside LLM training? What are the basics that make them work?
Very helpful and well explained. Many thanks.
But it does not work. I get the following error:
Traceback (most recent call last):
  File "/.../Python Scripts/ollama.py", line 1, in <module>
    import ollama
  File "/.../Python Scripts/ollama.py", line 26, in <module>
    answer = ask(query)
  File "/.../Python Scripts/ollama.py", line 9, in ask
    response = ollama.chat(model = 'llama3',
AttributeError: partially initialized module 'ollama' has no attribute 'chat' (most likely due to a circular import)
Do I need localhost configured for that? Ollama is installed on macOS, and the ollama lib is pip-installed. It works well in the terminal. Any hints? Thanks
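The traceback above shows the script itself is named ollama.py, so `import ollama` imports the script rather than the installed library, which is what produces the "partially initialized module" error. A minimal stdlib sketch of the same shadowing effect (the module name `shadowdemo` is hypothetical, chosen just for the demo):

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Create a directory containing a file that shares a library's name.
tmp = Path(tempfile.mkdtemp())
(tmp / "shadowdemo.py").write_text("NOTE = 'I am the local file, not the library'\n")

# Directories earlier on sys.path win the import, just as a script's own
# folder wins when the script is named ollama.py.
sys.path.insert(0, str(tmp))
mod = importlib.import_module("shadowdemo")

print(hasattr(mod, "chat"))  # False: the local file has no 'chat' attribute
print(mod.NOTE)
```

The likely fix: rename the script (e.g. to something like ollama_demo.py) and delete any leftover `__pycache__` folder next to it, so `import ollama` resolves to the real package again.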
I’m running llama3 on a plain old M1 iMac and it seldom takes more than 20 seconds per request.
Why is it not using the full GPU instead of the CPU? Please guide me on how to use the full GPU.
Phi always wants to tell me something about a village with five houses. XD
I think they named it wrong. I think it's V.I. (Virtual Intelligence), not Artificial Intelligence; the difference is that a V.I. needs to be online, while an A.I. is supposed to be like a brain.
I watch my llama process one word at a time
llama lectures me about Python prompt injections being illegal
I do have a lot of fun with Mistral, but it's slow, too
import ollama
^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'ollama'
pip3 install ollama ... you also have to install the Ollama module for Python
@@elithecomputerguy not working, still the same error
VS Code is probably using the wrong interpreter... Google how to troubleshoot that.
@@elithecomputerguy ok thank
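When pip reports a package installed but the editor still raises ModuleNotFoundError, the editor is usually running a different Python than the one pip used. A quick stdlib check, run from the same place the error occurs, shows which interpreter is actually executing:

```python
import sys

# The interpreter actually running this code; compare it with the one
# that `pip3 install ollama` used (run `pip3 -V` in a terminal to see it).
print(sys.executable)

# The environment root this interpreter searches for installed packages.
print(sys.prefix)
```

If the two paths differ, point VS Code at the pip interpreter (the "Python: Select Interpreter" command) or install the package with that exact interpreter via `<path-from-sys.executable> -m pip install ollama`.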