Ollama - Loading Custom Models

  • Published Jan 9, 2025

Comments • 49

  • @5Komma5
    @5Komma5 11 months ago +5

    That worked. Thanks.
    If the model page lacks information and there is a similar model available, you can get the modelfile by loading that model and using (example after this thread):
    ollama show --modelfile

    • @samwitteveenai
      @samwitteveenai  11 months ago +1

      Yes, they have added some nice commands since I made this video. You can also make changes now while the model is running and export those to the setup/modelfile. I will try to make a new updated video.

    • @MasonJames
      @MasonJames 11 months ago +1

      Would also love to see a new version - I've referenced this one several times. @samwitteveenai thank you!
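
      (A minimal example of the command mentioned above; mistral-openorca is a stand-in for whichever similar model you have pulled:)

          ollama pull mistral-openorca
          ollama show --modelfile mistral-openorca > Modelfile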

  • @the_real_cookiez
    @the_real_cookiez 10 months ago +1

    This is so awesome. With the new Gemma LLM, I wanted to load that model in. Thank you!

  • @AKSTEVE1111
    @AKSTEVE1111 9 months ago

    It worked like a charm, thank you! Just need to look at the model with my web browser.

  • @nicolasayastuy
    @nicolasayastuy 13 days ago

    Hey Sam, thanks for this easy-to-follow video. Is there a version of this for when Ollama and OpenWebUI are running inside Docker containers that you can point me to? Thanks!

  • @guanjwcn
    @guanjwcn 1 year ago +2

    Can't wait for the Windows version to try it out.

  • @jasonp3484
    @jasonp3484 8 months ago

    Outstanding, my friend! I learned a new skill today! Thank you very much for the lesson.

  • @ads3453
    @ads3453 14 days ago

    Do you have that template text anywhere so we can copy it?
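
    (A reconstruction of the sort of template shown in the video, not the exact text; the GGUF filename is a placeholder, and the ChatML format is an assumption based on an OpenOrca-style Mistral fine-tune:)

        FROM ./jackalope-7b.Q4_K_M.gguf

        TEMPLATE """<|im_start|>system
        {{ .System }}<|im_end|>
        <|im_start|>user
        {{ .Prompt }}<|im_end|>
        <|im_start|>assistant
        """

        PARAMETER stop "<|im_end|>"
        PARAMETER stop "<|im_start|>"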

  • @HunterZolomon
    @HunterZolomon 11 months ago

    Appreciate this a lot, thanks! The stop parameters in your example don't seem necessary as a default, though (they're even detrimental for some models, which stop halfway through the response), and could be explained a bit more thoroughly. You could do a clip going through the parameters, starting with PARAMETER num_ctx ;)
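
    (A sketch of the tweak suggested above; the filename is a placeholder, and the right value of num_ctx depends on the model:)

        FROM ./my-model.Q4_K_M.gguf
        # raise the context window (in tokens) from the default
        PARAMETER num_ctx 4096
        # add stop sequences only if the model's template actually needs them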

  • @BikinManga
    @BikinManga 7 months ago

    Thank you, your example modelfile template saved me from the headache of loading a custom Yi model. It's perfect!

  • @dib9900
    @dib9900 7 months ago

    Where can I find the expected Parameters & Template values for a given model when the Modelfile is not included with the model I'm converting to Ollama format?
    I'm specifically interested in an embeddings model, not an LLM.
    For example, this model: SFR-Embedding-Mistral-GGUF
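
    (A guess at a minimal modelfile for an embedding GGUF: embedding models are generally run without a TEMPLATE or stop parameters, so a bare FROM line may be all that's needed. The filename is a placeholder:)

        FROM ./sfr-embedding-mistral.Q4_K_M.gguf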

  • @jasperyou645
    @jasperyou645 6 months ago

    Thank you for sharing! I just want to know: could I run Jackalope unquantized with Ollama? It seems the GGUF file is used to store the quantized model.

  • @gammingtoch259
    @gammingtoch259 4 months ago

    How can I import these files with something like ln -s (a symbolic link), but have Ollama do it automatically? Is that possible?
    The problem is that Ollama keeps models in a folder under hashed names, and I need these GGUF models for another program too and don't want to duplicate them.

  • @yunomi26
    @yunomi26 1 year ago

    Hey, so I wanted to build a RAG architecture. Can I take one of the embedding models from the MTEB leaderboard, build it as a model through Ollama, then use it with Ollama's embeddings API to generate embeddings? But then I can only use the API for the model that is running, and I wanted Mistral to generate completions and GTE for embeddings. How do you think I can solve that?
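
    (One possible approach, sketched with curl against Ollama's HTTP API; Ollama can serve several models and load them on demand, so the embedding model and the completion model don't have to be the same. The model names here are placeholders:)

        # embeddings from one model
        curl http://localhost:11434/api/embeddings -d '{"model": "gte-base", "prompt": "text to embed"}'

        # completions from another
        curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "answer using the retrieved context..."}'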

  • @nicolashuve3558
    @nicolashuve3558 9 months ago

    Hey, thanks for that. Where are the models located on a Mac? I can't seem to find them anywhere.
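
    (On macOS the default location is ~/.ollama; the blob files are named by sha256 hash rather than by model name:)

        ls ~/.ollama/models/manifests   # model names/tags
        ls ~/.ollama/models/blobs       # sha256-named weight files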

  • @Pure_Science_and_Technology
    @Pure_Science_and_Technology 1 year ago

    Do you have a video about setting up a PC for running LLMs? What GPU, how much memory, what software is needed, and so on?

  • @t-dsai
    @t-dsai 1 year ago +1

    Thank you Mr. Witteveen for this helpful video. One question: is it possible to point the Ollama settings directory at a custom place instead of the default "~/.ollama"? (See the sketch after this thread.)

    • @StevenSeiller
      @StevenSeiller 1 year ago

      +1 🤏 My system drive is small compared to my TBs of data drives.
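
      (A sketch, assuming a reasonably recent Ollama build: the OLLAMA_MODELS environment variable relocates model storage. It must be set for the process that runs the server; the path is a placeholder:)

          export OLLAMA_MODELS=/mnt/data/ollama/models
          ollama serve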

  • @Annachrome
    @Annachrome 1 year ago

    Thanks for introducing me to Ollama! I am running open-source models with LangChain, but having trouble with models calling (or not using) custom tools appropriately. Would you mind making a tutorial on initializing agents without OpenAI models? Perhaps with the prefix/format_instructions/suffix kwargs. 🙏 All the docs, tutorials, and deep learning courses use OpenAI models.... 😢

  • @noob-ep4lx
    @noob-ep4lx 10 months ago

    Hello! Thank you so much for this video, but I ran into a problem: my storage filled up halfway through the download and the progress bar paused (stuck at 53.48%). Whenever I quit and rerun the command, it checks 5 files, skips 5 files, and pauses there. Is there any way to fix this?

    • @samwitteveenai
      @samwitteveenai  10 months ago

      I suggest just going in, deleting the model files, and starting again.

  • @brunapupoo4809
    @brunapupoo4809 4 months ago

    ollama create modelfile -f ./modelfile
    transferring model data 100%
    converting model
    Error: open config.json: file does not exist
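
    (A guess at the cause: this error typically appears when ollama create tries to import a raw Hugging Face checkpoint directory, which needs config.json, rather than a GGUF. Pointing FROM directly at the .gguf file avoids it; the filename below is a placeholder:)

        # modelfile
        FROM ./jackalope-7b.Q4_K_M.gguf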

  • @savelist1
    @savelist1 1 year ago

    Hi Sam, wondering why you have not done any LlamaIndex videos?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      I have done a few but didn't release them, for a variety of reasons (the changing API, etc.). I will make some new ones. I do use LlamaIndex for certain work projects, and it has some really nice features.

  • @thaithuyw4f
    @thaithuyw4f 1 year ago

    Which folder do you put the modelfile in?
    I can't even find Ollama's primary model folder, even using realpath and which,...

    • @thaithuyw4f
      @thaithuyw4f 1 year ago

      Oh sorry, now I found Ollama's model folder; it only has files starting with sha256..., so I think your download folder can be anywhere.
      But when I run create I get this error, even when using sudo:
      ⠦ transferring context Error: rename /tmp/sha256:08c6abdff588bf35db696057c1cd7861caf722e7e2c25b2ab7c18c16463723071254256853 /usr/share/ollama/.ollama/models/blobs/sha256:08c6abdff588bf35db696057c1cd7861caf722e7e2c25b2ab7c18c1646372307: invalid cross-device link
      Do you know why?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      Not sure why you have a blob like that; normally it will be a named file, and then Ollama will make the blob, etc., and copy it to the right location. Your modelfile (the text file) should specify the path to the llama.cpp (GGUF) file.
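
      (A possible explanation, not confirmed: "invalid cross-device link" is the rename(2) EXDEV error, which occurs when the temp file and the models directory sit on different filesystems. Since Go honours TMPDIR, pointing it at the same filesystem may work around it:)

          sudo mkdir -p /usr/share/ollama/tmp
          sudo TMPDIR=/usr/share/ollama/tmp ollama create mymodel -f ./modelfile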

  • @nitingoswami1959
    @nitingoswami1959 1 year ago

    It doesn't support multi-threading 😭😭

  • @KarlHeinzBrockhausen
    @KarlHeinzBrockhausen 1 year ago

    I can't find any folder with models inside on Ubuntu, only temp files.

    • @KratomSyndicate
      @KratomSyndicate 11 months ago

      Models are located in /usr/share/ollama/.ollama/models/, or on WSL2 at \\wsl.localhost\Ubuntu\usr\share\ollama\.ollama\models\

  • @nitingoswami1959
    @nitingoswami1959 1 year ago

    I have 16 GB of RAM and a Tesla graphics card, but Ollama is still slow to generate answers. It seems like it only uses the CPU. How can I utilise both the CPU and GPU simultaneously? 🤔🤔

    • @LeftThumbBreak
      @LeftThumbBreak 1 year ago

      If you're running a Tesla graphics card, I'm assuming you're on a Linux machine and not a Mac. If so, are you sure you're running the Linux distro? I run Ollama all the time on GPU-equipped servers and it runs on the GPU.

    • @nitingoswami1959
      @nitingoswami1959 1 year ago

      @LeftThumbBreak Running on Ubuntu, but when I send a 1st request using curl and a 2nd request at the same time, it waits for the 1st request to finish and only then processes the second one. Why is that happening? Is it due to the CPU, or to not having multi-threading?
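
      (To confirm whether the GPU is being used, watch nvidia-smi while a request is in flight; this assumes NVIDIA drivers are installed. Requests to one model being processed one at a time was normal Ollama behaviour at the time, and is separate from the CPU/GPU question:)

          watch -n 1 nvidia-smi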

  • @mojeime3
    @mojeime3 3 months ago

    Great tutorial. Thanks.

  • @MichealAngeloArts
    @MichealAngeloArts 1 year ago

    Thanks for sharing this. Is there a link to that modelfile you show in the video (on GitHub etc.)?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      I updated it in the description but here it is - huggingface.co/TheBloke/jackalope-7B-GGUF/tree/main

    • @MichealAngeloArts
      @MichealAngeloArts 1 year ago

      @samwitteveenai Sorry, I didn't mean to ask about the HF model files (the GGUF) but about the model 'configuration' file used by Ollama to load the model. Obviously plenty of 'model file' terminology in the loop 😀

    • @responsible-adult
      @responsible-adult 1 year ago

      Jackalope running wild (template problem?)
      Really liking the Ollama series, but having a Jackalope problem.
      Using the Jackalope configuration text file I tried to copy from the video, when I run the resulting model the "creature" goes into a loop and starts generating questions for itself and answering them. I think it's related to the template.
      Please post the exact known-to-work configuration file for Jackalope. Thanks!

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      @responsible-adult You can take the template from something like mistralorca by loading that model and using "/show template" (example after this thread). It sounds like you have an error in that, or possibly you are using a lower-quantized version??

    • @MasonJames
      @MasonJames 1 year ago

      @MichealAngeloArts I'm also stuck on this step. My modelfile works, but the resulting model doesn't seem to "converse" well. Not sure how to troubleshoot the modelfile specifically.
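
      (The command Sam mentions runs inside an interactive session; for example, assuming you have mistral-openorca pulled:)

          ollama run mistral-openorca
          >>> /show template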

  • @wiltedblackrose
    @wiltedblackrose 1 year ago

    I've had a lot of issues running ANY model with Ollama. It keeps crashing on me. Did you have that too? (Btw, there is an issue open right now...)

    • @samwitteveenai
      @samwitteveenai  1 year ago

      So far it has been pretty rock solid for me on the 2 Macs I have been running it on.

    • @wiltedblackrose
      @wiltedblackrose 1 year ago

      @samwitteveenai So I assume in CPU-only mode... That explains it. The issue I was facing was with CUDA.

  • @julian-fricker
    @julian-fricker 1 year ago

    Thanks for the great video. You should give LM Studio a try. It makes finding and downloading models easier, can make use of the GPU, and lets you run these models with a ChatGPT-compatible API.

  • @RobotechII
    @RobotechII 1 year ago +1

    Really cool, very Docker-esque!

  • @bimbotsolutions6665
    @bimbotsolutions6665 1 year ago

    AWESOME WORK, THANKS A LOT...