The Ollama Course - Using the CLI

  • Published Oct 23, 2024

Comments • 30

  • @shuntera · 2 months ago +4

    Love this. I think we've all been doing just-in-time learning to keep things running and stay up to date with what's happening every couple of weeks. Great to take it back to the foundations, Matt.

  • @AricRastley · a month ago +2

    Exactly what I was looking for! THANK YOU!

  • @ABatorfi · 2 months ago +2

    Thank you for this awesome course, I'm enjoying it!

  • @jimlynch9390 · 2 months ago +1

    I'm really enjoying this series. Thanks.

  • @marianoarganaraz · 2 months ago +1

    I love the way you explain. Thanks

  • @artur50 · 2 months ago

    Excellent content Matt! Congrats! Keep on going.

  • @derekf1111 · 2 months ago

    I love your videos! Your explanations are amazing, thank you!

  • @fabriai · 2 months ago

    Wonderful video, Matt. Thanks so much for sharing this.

  • @federicoarg00 · 2 months ago

    this is amazing, super clear, thank you!

  • @engineermajidd · 2 months ago

    Looking forward to your next video.

    • @technovangelist · 2 months ago

      And here it is... comparing quantizations.

  • @ISK_VAGR · 2 months ago

    Man, I love it. I've already subscribed. Something I'd really love to know is how to store my local Ollama models on an external hard drive on a Mac. As you know, Macs don't have much space, so I bought a special hard drive that runs at 40G/sec for models and other stuff, and I'd rather have the models there than on my internal drive. Thanks for the great content and explanations.

    • @technovangelist · 2 months ago +1

      They don't have much space? Sure they do. Mine has 4TB inside.
      But you can use the OLLAMA_MODELS environment variable to start storing them elsewhere.
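
      A minimal sketch of what that might look like in a terminal on macOS (the external-drive path below is only a placeholder for wherever the drive mounts):

          # point Ollama at a models directory on the external drive, then start the server
          export OLLAMA_MODELS=/Volumes/FastSSD/ollama-models
          ollama serve

          # if Ollama runs as the macOS menu-bar app rather than via `ollama serve`,
          # the variable can be set with launchctl before restarting the app
          launchctl setenv OLLAMA_MODELS /Volumes/FastSSD/ollama-models

      Existing models would need to be copied (or re-pulled) into the new directory.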

    • @ISK_VAGR · 2 months ago

      @technovangelist Do you plan to do a video about it?

  • @romulopontual6254 · 2 months ago

    Very nice! Thank you.

  • @shoelessone · 2 months ago

    Hey Matt! Off-topic comment, but I guess I'm feeding the ol' YouTube algorithm anyway!
    I haven't watched your entire backlog, so apologies if you've already covered this, but I'd love to see some content/videos on the following topics:
    1. How to use Ollama in a production environment: topics around infrastructure, reasonable techniques (e.g. handing off processing to async jobs when possible), cost, etc. I'm not sure how common this use case is, but I'm evaluating using something like Llama 3.1 to help summarize some potentially very large text files and weighing the cost differences between using something turnkey like OpenAI's APIs vs. figuring out hosting myself (well, my company). There seems to be a lot less out there on production-hardening some of these open-source models (or I just haven't been paying attention!).
    2. A "state of the union" high-level overview of the options available to a software developer new to using AI. You have covered this in a lot more detail in various forms, but an overview of what tools are actually at a person's disposal for using AI to solve some problem would help. When I first started looking at this stuff, I thought my only options were buying a bunch of supercomputers to train models and learning a lot about matrix multiplication. But we have RAG, we have fine-tuning, we have modifying system prompts... a high-level overview of what a layperson can do, and perhaps where the reasonable off-ramps for more advanced use cases are, would be super helpful (i.e. when do I need to brush up on my linear algebra? :))
    Thanks for your work!

  • @MichaPiotrKozowski · 2 months ago

    Thank you!

  • @build.aiagents · 2 months ago

    More cool stuff please!

  • @NLPprompter · 2 months ago

    Ah... the ollama serve... LOL, I wasted a week until I realized it was a user issue on Linux. I felt so stupid having duplicate models and things. This is a really good video; anyone new to Ollama should watch it. If I had watched it before, I wouldn't have wasted a week just to realize how simple the user issue was...
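
    A note on the issue described above: on a default Linux install, the packaged systemd service runs as its own `ollama` user, so models pulled through the service and models pulled while running `ollama serve` manually as your own user can end up in two different stores. A rough way to check (the paths are the common defaults, not guaranteed for every install):

        # is the packaged systemd service running?
        systemctl status ollama

        # the two places models typically end up on Linux
        ls ~/.ollama/models                         # `ollama serve` run manually as your user
        sudo ls /usr/share/ollama/.ollama/models    # the systemd service, running as the `ollama` user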

  • @mpesakapoeta · 2 months ago

    Please share the link to the video about reducing the model size for specific tasks, for example only weather; it wouldn't need the whole context for that.

    • @technovangelist · 2 months ago

      You would be able to fine-tune for that, but it wouldn't reduce the size. Reducing the size would be a very expensive process.

  • @PBrioschi · 2 months ago

    Hi Matt, thank you for another piece of amazing content.
    I'm working with Ollama and other tools available from the community to develop some solutions for my company.
    I need some help from a professional consultant for this job.
    Could you work with me, or maybe recommend someone who can help me do it?

  • @pythonantole9892 · 2 months ago

    I have a noob question. If anybody can upload a model to Ollama, is it possible for a malicious user to upload malware disguised as a model? And are there measures to prevent such a scenario?

    • @technovangelist · 2 months ago

      There would be no way to do that.

  • @sammcj2000 · 2 months ago

    I wouldn't recommend creating models with the legacy Q4_0 quant types; they're deprecated and are worse quality than K quants (or IQ quants if you're running with CUDA).
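
    A minimal sketch of creating a model with a K-quant instead of the legacy q4_0 (the file name and model name are placeholders; this assumes a Modelfile whose FROM line points at an unquantized fp16 GGUF):

        # Modelfile (one line):
        # FROM ./my-model-f16.gguf

        # quantize while creating the model, using a K-quant rather than q4_0
        ollama create my-model -f Modelfile --quantize q4_K_M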

  • @vulcan4d · 2 months ago

    Removing models is the most annoying part because you have to name them exactly. Wish they made it easier to just select and delete via a GUI, or list them and select one to remove by number.

    • @technovangelist · 2 months ago

      That’s one reason I love gollama. I have a video about it.
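
      For a quick CLI-only workaround, something like this bash sketch can remove a model by its position in the list instead of typing the full name (gollama, mentioned above, is the nicer interactive option):

          # show the installed models with row numbers
          ollama list | nl

          # remove, say, the 3rd model (row 1 of `ollama list` is the header)
          ollama rm "$(ollama list | awk 'NR==4 {print $1}')"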

  • @JNET_Reloaded · 2 months ago

    What location do you run that download-a-Hugging-Face-model command from? And where does it download to? The same location as the other models? Where's that?
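
    For reference, the pull can be run from any directory, and the model lands in the same store as every other Ollama model. A hedged sketch (the repository name is just an example, and the paths are the usual defaults):

        # pull a GGUF model straight from Hugging Face, optionally picking a quant tag
        ollama pull hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M

        # models are stored alongside all other Ollama models, typically:
        #   macOS:   ~/.ollama/models
        #   Linux:   ~/.ollama/models (or /usr/share/ollama/.ollama/models for the service install)
        #   Windows: C:\Users\<you>\.ollama\models
        # the location can be overridden with the OLLAMA_MODELS environment variable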

  • @JNET_Reloaded · 2 months ago

    Anyone wanna swap code for tokens?