Great video, thanks! Allowed me to wrap my head around doing this locally
I recently rediscovered your channel after losing track of it for a while. Back in the day, I remember you were all about general IT content, so it's great to see you active again! As a computer scientist and AI engineer, I almost turned my back on AI due to the limitations of early models. However, the advent of transformers, attention mechanisms, and other breakthroughs reignited my passion. I studied AI at MIT and, honestly, I used to think it might have been in vain; turns out, I was wrong! I've been deeply involved in AI research for the past two years, publishing articles and currently working on enhancing Retrieval-Augmented Generation for sectors like finance, healthcare, and law. It's exhilarating to pivot away from IT infrastructure and network management. I definitely don't miss developing point-of-sale systems; I'm much happier innovating in AI!
You helped me a lot 9 years ago with your network videos. Glad to see you're still here! Also, what a shame your channel is getting so few views nowadays.
Wow, I didn't realize this finally came out. I was told about all the roar, but it was hiding in my feed.
Thanks for the great content.
Enjoyed the video. It's really cool that Ollama can also read images. I've really been enjoying LM Studio lately.
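If it helps anyone, here is a rough sketch of what reading an image can look like with the ollama Python library and a vision-capable model such as llava; the model tag and image path below are placeholders, not something from the video.

import ollama

# Ask a vision-capable model about a local image (the file path is a placeholder;
# point it at an image you actually have).
response = ollama.chat(
    model='llava',
    messages=[{
        'role': 'user',
        'content': 'Describe what is in this picture.',
        'images': ['./example.jpg'],
    }],
)
print(response['message']['content'])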
This is all more fascinating than ever now with Llama 3.1. I have been plugging tagged satellite and inertial data into my message content to have a time-, inertial-, and geolocation-aware Llama. It works really well; it's amazing. The plan is more embedded systems for more sensors.
The absolutely insane thing about Llama knowing the time and location is that it can potentially work out where anything else is, and in what direction, at a given time. And it works!
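For anyone wanting to try the same trick, here is a minimal sketch of tagging a prompt with time and position before sending it to ollama.chat; the sensor values, tag format, and model name are illustrative stand-ins, not the commenter's actual setup.

import ollama
from datetime import datetime, timezone

# Hypothetical sensor readings; in practice these would come from GPS/IMU hardware.
latitude, longitude = 51.5074, -0.1278
heading_deg = 270.0
timestamp = datetime.now(timezone.utc).isoformat()

# Prepend tagged time/location/inertial context to the user prompt.
context = (
    f"[time: {timestamp}] "
    f"[position: lat={latitude}, lon={longitude}] "
    f"[heading: {heading_deg} degrees]"
)
question = "Which direction is Paris from my current position?"

response = ollama.chat(
    model='llama3.1',
    messages=[{'role': 'user', 'content': f"{context}\n{question}"}],
)
print(response['message']['content'])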
This video is better than Obama
Ollama, better than Obama!
Hi Eli, I was wondering if you could do a video on implementing an LLM and then fine-tuning it for some business use-case example. That would be so interesting.
Love this video.
Love it.
Great video. The Eli format is the best; he is the person I would want to have on the team. Could you kindly advise any course/book/article/video to understand what's inside LLM training? What are the basics that make them work?
Thank you.
I think they named it wrong. I think it's V.I. (Virtual Intelligence), not Artificial Intelligence; the difference is that a V.I. needs to be online, while an A.I. is supposed to be like a brain.
very helpful and well explained. Many thanks.
But it does not work. I get the following error:
Traceback (most recent call last):
  File "/.../Python Scripts/ollama.py", line 1, in <module>
    import ollama
  File "/.../Python Scripts/ollama.py", line 26, in <module>
    answer = ask(query)
  File "/.../Python Scripts/ollama.py", line 9, in ask
    response = ollama.chat(model = 'llama3',
AttributeError: partially initialized module 'ollama' has no attribute 'chat' (most likely due to a circular import)
Do I need localhost configured for that? Ollama is installed on macOS, the ollama lib is pip installed, and it works well in the terminal. Any hints? Thanks
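The traceback itself points at the likely cause: the script is named ollama.py, so import ollama imports the script itself instead of the installed package (hence the circular import). A minimal sketch of a working call, assuming the file is renamed to something else; ask_llama.py is just an illustrative name.

# ask_llama.py -- renamed so it no longer shadows the installed 'ollama' package
import ollama

def ask(query):
    # Send a single-turn chat request to a locally running llama3 model.
    response = ollama.chat(
        model='llama3',
        messages=[{'role': 'user', 'content': query}],
    )
    return response['message']['content']

if __name__ == '__main__':
    print(ask('Why is the sky blue?'))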
Is there a way to load in some training data with Ollama?
I’m running llama3 on a plain old M1 iMac and it seldom takes more than 20 seconds per request.
Why is it not using the full GPU instead of the CPU? Please guide me on how to use the full GPU.
Phi always wants to tell me something about a village with five houses. XD
I watch my llama process one word at a time
llama lectures me about Python prompt injections being illegal
I do have a lot of fun with Mistral, but it's slow, too
import ollama
^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'ollama'
pip3 install ollama ... you also have to install the Ollama module for Python.
@elithecomputerguy Not working, still the same error.
VS Code is probably using the wrong interpreter... Google how to troubleshoot it.
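If it helps, a small diagnostic sketch you can run from inside VS Code to see which interpreter it is using and whether the ollama package is visible to it:

import sys

# Shows which Python executable VS Code is running; if this differs from the
# interpreter that `pip3 install ollama` targeted, the import will fail.
print(sys.executable)

try:
    import ollama
    print("ollama found at:", ollama.__file__)
except ModuleNotFoundError:
    print("ollama is not installed for this interpreter")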
@elithecomputerguy OK, thanks.