Ollama - Loading Custom Models

  • Published Oct 3, 2024

Comments • 47

  • @5Komma5
    @5Komma5 8 months ago +5

    That worked. Thanks.
    If the model page lacks information and a similar model is available, you can get the modelfile by loading that model and using
    ollama show --modelfile
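
    For example, to dump a pulled model's modelfile for reuse (a sketch; mistral-openorca is just an illustrative model name you already have pulled):
    ollama show mistral-openorca --modelfile > Modelfile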

    • @samwitteveenai
      @samwitteveenai  8 months ago +1

      Yes, they have added some nice commands since I made this video. You can also make changes now while the model is running and export those to the setup/modelfile. I will try to make a new, updated video.
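
      For example, inside an interactive session (a sketch; /set and /save are the commands added for this):
      ollama run mistral
      >>> /set parameter temperature 0.3
      >>> /save my-mistral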

    • @MasonJames
      @MasonJames 8 months ago +1

      Would also love to see a new version - I've referenced this one several times. @samwitteveenai thank you!

  • @the_real_cookiez
    @the_real_cookiez 7 months ago +1

    This is so awesome. With the new gemma LLM, I wanted to load that model in. Thank you!

  • @jasonp3484
    @jasonp3484 5 months ago

    Outstanding my friend! I learned a new skill today! Thank you very much for the lesson

  • @AKSTEVE1111
    @AKSTEVE1111 5 months ago

    It worked like a charm, thank you! Just need to look at the model with my web browser.

  • @BikinManga
    @BikinManga 4 months ago

    Thank you for your example modelfile template. It saved me from the headache of loading the Yi custom model. It's perfect!

  • @guanjwcn
    @guanjwcn 11 months ago +2

    Can't wait for the Windows version to try it out.

  • @HunterZolomon
    @HunterZolomon 7 months ago

    Appreciate this a lot, thanks! The stop parameters in your example don't seem necessary as a default, though (they're even detrimental for some models, which stop halfway through the response), and could be explained a bit more thoroughly. You could do a clip going through the parameters, starting with PARAMETER num_ctx ;)
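
    For instance, in the modelfile (a sketch; the values are illustrative and the right stop strings depend on the model's prompt format):
    FROM ./jackalope-7b.Q4_K_M.gguf
    PARAMETER num_ctx 4096
    PARAMETER stop "<|im_end|>"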

  • @RobotechII
    @RobotechII 11 months ago +1

    Really cool, very Docker-esque!

  • @carlosparica8131
    @carlosparica8131 9 months ago

    Hello Mr Witteveen. Thanks for the informative video! May I request a more in-depth explanation of what a modelfile is and how it works, more specifically the TEMPLATE?
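
    For reference, the ChatML-style TEMPLATE that OpenOrca-based models use looks roughly like this (a sketch; {{ .System }} and {{ .Prompt }} are Ollama's template variables, and the exact tokens depend on the fine-tune):
    TEMPLATE """<|im_start|>system
    {{ .System }}<|im_end|>
    <|im_start|>user
    {{ .Prompt }}<|im_end|>
    <|im_start|>assistant
    """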

  • @Annachrome
    @Annachrome 11 months ago

    Thanks for introducing me to Ollama! I am running open-source models on LangChain, but having trouble with models calling (or not using) custom tools appropriately. Would you mind making a tutorial for initializing agents without openai models? Perhaps with prefix/format_instructions/suffix kwargs. 🙏 All the docs, tutorials, deeplearning courses use openai models.... 😢

  • @t-dsai
    @t-dsai 11 months ago +1

    Thank you Mr. Witteveen for this helpful video. One question: is it possible to point the Ollama settings directory at a custom place instead of the default "~/.ollama"?

    • @StevenSeiller
      @StevenSeiller 10 months ago

      +1 🤏 My system drive is small compared to my TBs data drives.
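
      One possible workaround (an assumption: this relies on newer Ollama builds honoring the OLLAMA_MODELS environment variable):
      export OLLAMA_MODELS=/data/ollama/models   # illustrative path on a bigger drive
      ollama serve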

  • @jasperyou645
    @jasperyou645 3 months ago

    Thank you for sharing! I just want to know: could I run Jackalope unquantized with Ollama? It seems the GGUF file is used to store the quantized model.
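
    For what it's worth, GGUF can also store unquantized (e.g. F16) weights, so a modelfile pointing at an F16 export should load the same way (a sketch; the filename is hypothetical):
    FROM ./jackalope-7b.f16.gguf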

  • @gammingtoch259
    @gammingtoch259 a month ago

    How can I import these files with something like ln -s (a symbolic link), but have Ollama do it automatically? Is that possible?
    The problem is that Ollama keeps models in a folder with hashed names, and I need these GGUF models for other programs too and don't want to duplicate them.
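
    One unofficial approach (a sketch, and an assumption that Ollama keeps working if you do this: let Ollama copy the file into its blob store, then point your other programs at that blob via a symlink):
    ollama show mymodel --modelfile        # the FROM line reveals which blob holds the weights
    ln -sf ~/.ollama/models/blobs/sha256-<digest> ~/models/shared.gguf   # <digest> is a placeholder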

  • @dib9900
    @dib9900 4 months ago

    Where can I get the expected PARAMETER and TEMPLATE values for a given model if the modelfile is not included with the model I'm converting to Ollama format?
    I'm specifically interested in an embeddings model, not an LLM.
    For example, this model: SFR-Embedding-Mistral-GGUF
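
    A guess for the embeddings case (an assumption: embedding models aren't chat models, so they typically need no TEMPLATE, and a bare FROM may be enough):
    FROM ./sfr-embedding-mistral.Q4_K_M.gguf   # hypothetical local filename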

  • @julian-fricker
    @julian-fricker 11 months ago

    Thanks for the great video. You should give LM Studio a try: it makes finding and downloading models easier, can make use of the GPU, and lets you run these models behind a ChatGPT-compatible API.

  • @nicolashuve3558
    @nicolashuve3558 6 months ago

    Hey, thanks for that. Where are models located on a mac? I can't seem to find them anywhere.
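
    (For reference, the default location on macOS should be ~/.ollama/models, with the weights stored as sha256-named blobs, so a quick check is:)
    ls ~/.ollama/models/blobs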

  • @brunapupoo4809
    @brunapupoo4809 29 days ago

    ollama create modelfile -f ./modelfile
    transferring model data 100%
    converting model
    Error: open config.json: file does not exist
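
    (That error suggests Ollama tried to convert a raw, non-GGUF checkpoint and couldn't find its config.json. A minimal sketch that sidesteps conversion entirely, assuming a single local GGUF file, with illustrative names:)
    # Modelfile
    FROM ./model.Q4_K_M.gguf

    ollama create mymodel -f ./Modelfile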

  • @Canna_Science_and_Technology
    @Canna_Science_and_Technology 11 months ago

    Do you have a video about setting up a PC for running LLMs? What GPU, how much memory, what software is needed, and so on?

  • @yunomi26
    @yunomi26 9 months ago

    Hey, so I want to build a RAG architecture. Can I use one of the embedding models from the MTEB leaderboard, make that model available through Ollama, then use it with Ollama's generate-embedding API to produce embeddings? But then the API can only be used for the model that is running, and I want Mistral to generate completions and gte for embeddings. How do you think I can solve this?
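
    Ollama can serve more than one model through the same API, loading each on demand, so something like this should work (a sketch; the model names are illustrative and must already exist in Ollama):
    curl http://localhost:11434/api/embeddings -d '{"model": "gte-large", "prompt": "a passage to embed"}'
    curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Answer using the retrieved context: ..."}'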

  • @MichealAngeloArts
    @MichealAngeloArts 11 months ago

    Thanks for sharing this. Is there a link to that model file you show in the video (on github etc)?

    • @samwitteveenai
      @samwitteveenai  11 months ago

      I updated it in the description but here it is - huggingface.co/TheBloke/jackalope-7B-GGUF/tree/main

    • @MichealAngeloArts
      @MichealAngeloArts 11 months ago

      @samwitteveenai Sorry, I didn't mean to ask about the HF model files (the GGUF) but the model 'configuration' file used by Ollama to load the model. Obviously plenty of 'model files' terminology in the loop 😀

    • @responsible-adult
      @responsible-adult 11 months ago

      Jackalope running wild (template problem?)
      Really liking the Ollama series, but having a Jackalope problem.
      Using the Jackalope configuration text file I tried to copy from the video, when I run the resulting model the "creature" goes into a loop and starts generating questions for itself and answering them. I think it's related to the template.
      Please post the exact known-to-work configuration file for Jackalope. Thanks!

    • @samwitteveenai
      @samwitteveenai  11 months ago +1

      @responsible-adult You can take the template from something like MistralOrca by loading that model and using "/show template". Sounds like you have an error in that, or possibly you are using a lower-quantized version?
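
      Concretely (a sketch; assumes the library model name mistral-openorca):
      ollama run mistral-openorca
      >>> /show template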

    • @MasonJames
      @MasonJames 11 months ago

      I'm also stuck on this step. My modelfile works, but the resultant model doesn't seem to "converse" well. Not sure how to troubleshoot the modelfile specifically. @MichealAngeloArts

  • @savelist1
    @savelist1 11 months ago

    Hi Sam, wondering why you have not done any LlamaIndex videos?

    • @samwitteveenai
      @samwitteveenai  11 months ago

      I have done a few but didn't release them, for a variety of reasons (the changing API, etc.). I will make some new ones. I do use LlamaIndex for certain work projects, and it has some really nice features.

  • @noob-ep4lx
    @noob-ep4lx 6 months ago

    Hello! Thank you so much for this video, but I ran into a problem: my storage filled up halfway through the installation and the progress bar paused (stuck at 53.48%). Whenever I quit and rerun the command, it checks 5 files, skips 5 files, and pauses there. Is there any way to fix this?

    • @samwitteveenai
      @samwitteveenai  6 months ago

      I suggest just going in, deleting the model files, and starting again.
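
      Roughly (an assumption: interrupted downloads leave partial blobs under the models directory):
      rm ~/.ollama/models/blobs/*-partial   # clear the stuck partial files
      ollama pull <model>                   # <model> is a placeholder; re-pull the one that stalled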

  • @nitingoswami1959
    @nitingoswami1959 11 months ago

    It doesn't support multi-threading 😭😭

  • @thaithuyw4f
    @thaithuyw4f 10 months ago

    What folder do you put the modelfile in?
    I can't even see Ollama's primary model folder, even using realpath and which, ...

    • @thaithuyw4f
      @thaithuyw4f 10 months ago

      Oh sorry, now I found Ollama's model folder; it only has files starting with sha256..., so I think your download folder can be anywhere.
      But when I run create I get this error, even when using sudo:
      ⠦ transferring context Error: rename /tmp/sha256:08c6abdff588bf35db696057c1cd7861caf722e7e2c25b2ab7c18c16463723071254256853 /usr/share/ollama/.ollama/models/blobs/sha256:08c6abdff588bf35db696057c1cd7861caf722e7e2c25b2ab7c18c1646372307: invalid cross-device link
      Do you know why?

    • @samwitteveenai
      @samwitteveenai  10 months ago

      Not sure why you have a blob like that; normally it will be a named file, and then Ollama will make the blob, etc., and copy it to the right location. Your model (text) file should specify the path to the llama.cpp (GGUF) file.
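
      (A guess at the "invalid cross-device link": /tmp and /usr/share/ollama sit on different filesystems, so the final rename fails. A possible workaround, assuming the Ollama server process respects TMPDIR:)
      sudo mkdir -p /usr/share/ollama/tmp
      sudo TMPDIR=/usr/share/ollama/tmp ollama serve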

  • @KarlHeinzBrockhausen
    @KarlHeinzBrockhausen 9 months ago

    I can't find any folder with models inside on Ubuntu, only temp files.

    • @KratomSyndicate
      @KratomSyndicate 8 months ago

      Models are located in /usr/share/ollama/.ollama/models/, or in WSL2 at \\wsl.localhost\Ubuntu\usr\share\ollama\.ollama\models\

  • @bimbotsolutions6665
    @bimbotsolutions6665 10 months ago

    AWESOME WORK, THANKS A LOT...

  • @nitingoswami1959
    @nitingoswami1959 11 months ago

    I have 16 GB of RAM and a Tesla graphics card, but Ollama still takes a long time to generate an answer. It seems like it only uses the CPU. How can I utilize both the CPU and GPU simultaneously? 🤔🤔

    • @LeftThumbBreak
      @LeftThumbBreak 11 months ago

      If you're running a Tesla graphics card, I'm assuming you're on a Linux machine and not a Mac. If so, are you sure you're running the Linux distro? I run Ollama all the time on GPU-equipped servers, and it runs on the GPU.
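
      A quick check (assumes the NVIDIA driver and nvidia-smi are installed): watch GPU utilization while a prompt generates; if it stays near 0%, inference is on the CPU.
      watch -n 1 nvidia-smi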

    • @nitingoswami1959
      @nitingoswami1959 11 months ago

      @LeftThumbBreak Running on Ubuntu, but when I send a 1st request using curl and send a 2nd request at the same time, it waits for the 1st request and only then processes the second one. Why is that happening? Is it due to the CPU, or to not having multi-threading?

  • @wiltedblackrose
    @wiltedblackrose 11 months ago

    I've had a lot of issues running ANY model with ollama. It keeps crashing on me. Did you have that too? (Btw, there is an issue open right now...)

    • @samwitteveenai
      @samwitteveenai  11 months ago

      So far it has been pretty rock solid for me on the 2 Macs I have been running it on.

    • @wiltedblackrose
      @wiltedblackrose 11 months ago

      @samwitteveenai So I assume in CPU-only mode... That explains it. The issue I was facing was with CUDA.