Ollama and Python for Local AI LLM Systems (Ollama, Llama2, Python)

  • Published Oct 30, 2024

Comments • 28

  • @MADMOGtheFrugal
    @MADMOGtheFrugal 7 months ago +6

    Great video, thanks! Allowed me to wrap my head around doing this locally

  • @Pure_Science_and_Technology
    @Pure_Science_and_Technology 6 months ago

    I recently rediscovered your channel after losing track of it for a while. Back in the day, I remember you were all about general IT content, so it's great to see you active again! As a computer scientist and AI engineer, I almost turned my back on AI due to the limitations of early models. However, the advent of transformers, attention mechanisms, and other breakthroughs reignited my passion. I studied AI at MIT and, honestly, I used to think it might have been in vain; turns out, I was wrong! I've been deeply involved in AI research for the past two years, publishing articles and currently working on enhancing Retrieval-Augmented Generation for sectors like finance, healthcare, and law. It's exhilarating to pivot away from IT infrastructure and network management. I definitely don't miss developing point-of-sale systems; I'm much happier innovating in AI!

  • @pedrogorilla483
    @pedrogorilla483 6 months ago +1

    You helped me a lot 9 years ago with your network videos. Glad to see you're still here! Also, what a shame your channel gets so few views nowadays.

  • @lquezada914
    @lquezada914 6 months ago +2

    Wow, I didn't realize this finally came out. I heard about all the roars, but this was hiding in my feed.
    Thanks for the great content.

  • @imorganmarshall
    @imorganmarshall 7 months ago +3

    Enjoyed the video. It's really cool that Ollama can also read images. I've really been enjoying LM Studio lately.

  • @holographicsol2747
    @holographicsol2747 months ago

    This is all more fascinating than ever now with Llama 3.1. I have been plugging tagged satellite and inertial data into my message content to have a time-, inertial-, and geolocation-aware Llama, and it works really well; it's amazing. The plan is more embedded systems for more sensors.
    The absolutely insane thing about Llama knowing time and location is that it can potentially work out where, and in what direction, anything else is at any point in time. And it works!
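    A minimal sketch of how tagged time and location context could be fed to the model through the ollama Python client. The tag format, the coordinates, the system prompt, and the model name are all assumptions for illustration, not the commenter's actual setup:

    ```python
    from datetime import datetime, timezone

    def build_messages(query: str, lat: float, lon: float) -> list[dict]:
        """Prepend tagged time and location context to the user's prompt."""
        now = datetime.now(timezone.utc).isoformat(timespec="seconds")
        context = f"[time={now}] [lat={lat:.4f}] [lon={lon:.4f}]"
        return [
            {"role": "system",
             "content": "You are aware of the current UTC time and the user's location."},
            {"role": "user", "content": f"{context} {query}"},
        ]

    # Hypothetical coordinates (Greenwich) and question:
    messages = build_messages("What direction is the sunrise from here?", 51.4779, -0.0015)
    print(messages[1]["content"])

    # With a running Ollama server, these messages would be sent like so:
    # import ollama
    # reply = ollama.chat(model="llama3.1", messages=messages)
    # print(reply["message"]["content"])
    ```

    Each request carries fresh sensor tags, so the model sees current time and position without any fine-tuning.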

  • @mohcinelayati7765
    @mohcinelayati7765 7 months ago +6

    This video is better than Obama

  • @gambers20001
    @gambers20001 7 months ago +8

    Olama, better than Obama!

  • @kimaegaii
    @kimaegaii 7 months ago

    Hi Eli, I was wondering if you could do a video on implementing an LLM and then fine-tuning it for some business use-case example. That would be so interesting.
    Love this video.

  • @NikolaJeremicwebforma
    @NikolaJeremicwebforma 7 months ago +1

    Love it.

  • @andriimarchuk9649
    @andriimarchuk9649 7 months ago

    Great video. Eli's format is the best; he is the person I would want to have on the team. Could you kindly advise any course/book/article/video to understand what's inside LLM training? What are the basics that made them work?

  • @NikolaJeremicwebforma
    @NikolaJeremicwebforma 7 months ago

    Thank you.

  • @daniel4net292
    @daniel4net292 7 months ago +1

    I think they named it wrong. I think it's V.I. (virtual intelligence) and not artificial intelligence; the difference is that a V.I. needs to be online, while an A.I. is supposed to be like a brain.

  • @axelwindbrake3908
    @axelwindbrake3908 5 months ago

    very helpful and well explained. Many thanks.
    But it does not work. I get the following error:
    Traceback (most recent call last):
      File "/.../Python Scripts/ollama.py", line 1, in <module>
        import ollama
      File "/.../Python Scripts/ollama.py", line 26, in <module>
        answer = ask(query)
      File "/.../Python Scripts/ollama.py", line 9, in ask
        response = ollama.chat(model = 'llama3',
    AttributeError: partially initialized module 'ollama' has no attribute 'chat' (most likely due to a circular import)
    Do I need a localhost configured for that? Ollama is installed on macOS, and the ollama lib is pip installed. It works well in the terminal. Any hints? Thanks
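    The traceback itself points at the likely cause: the script is named ollama.py, so `import ollama` resolves to the script rather than the pip-installed package, and the module ends up importing itself (hence "partially initialized module" and the circular-import hint). No localhost configuration is needed to clear this error; the usual fix is renaming the script (e.g. to ask_llama.py, a hypothetical name) and deleting any `__pycache__` next to it. A stdlib-only way to check which file Python actually resolves for the name:

    ```python
    import importlib.util

    # Where does `import ollama` actually come from?  If this prints a path
    # inside your own project (e.g. ".../Python Scripts/ollama.py") instead
    # of a site-packages directory, your script is shadowing the library
    # and should be renamed.
    spec = importlib.util.find_spec("ollama")
    print(spec.origin if spec else "ollama is not installed for this interpreter")
    ```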

  • @0_1_2
    @0_1_2 7 months ago

    Is there a way to load in some training data with Ollama?

  • @mendodsoregonbackroads6632
    @mendodsoregonbackroads6632 5 months ago

    I’m running llama3 on a plain old M1 iMac and it seldom takes more than 20 seconds per request.

  • @SomethingSpiritual
    @SomethingSpiritual 6 months ago

    Why is it not using the full GPU instead of the CPU? Please guide me on using the full GPU.

  • @cherubin7th
    @cherubin7th 7 months ago +1

    Phi always wants to tell me something about a village with five houses. XD

  • @0eieiei
    @0eieiei 7 months ago

    I watch my llama process one word at a time

    • @0eieiei
      @0eieiei 7 months ago

      llama lectures me about Python prompt injections being illegal

    • @0eieiei
      @0eieiei 7 months ago

      I do have a lot of fun with Mistral, but it's slow, too

  • @SomethingSpiritual
    @SomethingSpiritual 7 months ago

    import ollama
    ^^^^^^^^^^^^^
    ModuleNotFoundError: No module named 'ollama'

    • @elithecomputerguy
      @elithecomputerguy 7 months ago

      pip3 install ollama ... you also have to install the Ollama module for Python

    • @SomethingSpiritual
      @SomethingSpiritual 7 months ago

      @@elithecomputerguy Not working, still the same error.

    • @elithecomputerguy
      @elithecomputerguy 7 months ago +1

      VS Code is probably using the wrong interpreter... Google how to troubleshoot.

    • @SomethingSpiritual
      @SomethingSpiritual 6 months ago

      @@elithecomputerguy ok, thanks
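  When `pip3 install ollama` succeeds but the import still fails, it is almost always two different interpreters at work: the one pip3 installs into and the one VS Code runs. A quick stdlib-only check you could run from inside VS Code to see both facts at once (a sketch, not part of the original thread):

  ```python
  import importlib.util
  import sys

  # The interpreter actually running this script:
  print("interpreter:", sys.executable)

  # Whether that same interpreter can see the ollama package:
  found = importlib.util.find_spec("ollama") is not None
  print("ollama importable here:", found)

  # If this prints False, install with this exact interpreter, e.g.:
  #   /path/printed/above -m pip install ollama
  # or switch VS Code's "Python: Select Interpreter" to the one pip3 used.
  ```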