ComfyUI Tutorial Series: Ep13 - Exploring Ollama, LLaVA, Gemma Models
- Published on Sep 27, 2024
- In this episode, we’ll show you how to use the Ollama tool with ComfyUI to run and test models like LLaMA, Gemma, and LLaVA. I’ll guide you through installing Ollama, choosing models, and using them to generate prompts from text and images. Whether you're new to AI or looking for ways to improve your workflow, this tutorial will make things easier for you.
What You'll Learn:
- How to install and use Ollama
- Running models like LLaVA and Gemma in ComfyUI
- Generating text and image prompts easily
Join us and explore new possibilities with these models!
Get the workflows and instructions from Discord
Unlock exclusive perks by joining our channel: @pixaroma
#comfyui #ollama
---
Install Ollama from
ollama.com/
Install these custom nodes if you don't have them:
ComfyUI Ollama created by stavsap
ComfyUI Easy Use
Restart ComfyUI
---
If you open a command window (press Start, type "cmd", and press Enter),
you can check which models you have installed with the command
ollama list
To remove a model you can use
ollama rm model_name
After installing a model, you can talk to it in the command window; to exit, use the command
/bye
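The same check that `ollama list` gives you on the command line can be done programmatically against the local Ollama server's REST endpoint. This is a minimal sketch assuming the default port 11434 and the `/api/tags` endpoint; it simply returns an empty list if the server is not running.

```python
import json
import urllib.error
import urllib.request

def list_local_models(host="http://localhost:11434"):
    """Return the names of locally installed models from the Ollama
    server (GET /api/tags), or an empty list if the server is down."""
    try:
        with urllib.request.urlopen(host + "/api/tags", timeout=3) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return []  # server not started; open the Ollama app first

print(list_local_models())
```

If the list comes back empty even though you installed models, make sure the Ollama app is actually running in the background.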
---
!!! IMPORTANT
If you are using the LLM in ComfyUI, it requires the Ollama server to run. You can start it by opening the Ollama app. Keep in mind that while Ollama is running, it uses VRAM. If you are not using Ollama, simply quit it from the taskbar notification area by right-clicking on the Ollama icon and selecting "Quit."
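Under the hood, the ComfyUI Ollama nodes talk to that running server over HTTP, which is why it must be started first. As a rough sketch of what such a request looks like (assuming the default port and the documented `/api/generate` endpoint; the model name `gemma:2b` is just an example, use whichever model you pulled):

```python
import json
import urllib.error
import urllib.request

def ollama_generate(prompt, model="gemma:2b", host="http://localhost:11434"):
    """Send one non-streaming generation request to the local Ollama
    server (POST /api/generate). Returns the generated text, or None
    if the server is not running."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # get the whole answer in one JSON response
    }).encode("utf-8")
    req = urllib.request.Request(
        host + "/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            return json.load(resp).get("response")
    except (urllib.error.URLError, OSError):
        return None  # Ollama server is not running
```

Quitting Ollama from the taskbar shuts this server down, which is what frees the VRAM.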
Thanks for this tutorial and your Discord channel. The knowledge we gain from your efforts is so welcome.
Thank you for all the help 🙂 ⚔ Legends
"Wow, this video was a game changer! 🔥 Integrating LLM models and how incredibly useful they are for enhancing workflows. The detailed explanations and step-by-step guidance made everything super clear. Thank you so much for sharing this valuable knowledge - it's really helped me level up my understanding of AI! 🙌 Keep up the amazing work!" Thank you
Thanks for support 😊
yet another great video pixaroma!
(this is not sponsored comment)
thanks 😊
Another amazing video. I'm always learning something new and one of the best AI videos!
I got some really cool results with Ollama vision and controlnet.
Hi, consider setting the keep-alive setting to 0 for Ollama in the node once you have it incorporated into an image-producing workflow. Ollama then isn't kept in memory. This may help; I'm pretty certain the node's GitHub page explains this.
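The keep-alive trick mentioned above corresponds to the `keep_alive` field of the Ollama generate request. A small sketch of what the node would send with that setting (the model name and prompt here are just placeholder examples):

```python
import json

# keep_alive: 0 tells Ollama to unload the model immediately after the
# request finishes, freeing VRAM for the image-generation part of the
# workflow instead of keeping the LLM resident in memory.
payload = {
    "model": "llava",  # example model; use whichever one you pulled
    "prompt": "Describe this image as a detailed text-to-image prompt.",
    "stream": False,
    "keep_alive": 0,
}
print(json.dumps(payload, indent=2))
```

The trade-off is that the model has to be reloaded on the next request, so prompt generation gets a bit slower in exchange for the freed VRAM.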
If you replace run with pull, ollama will just download the model.
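The `pull` vs `run` distinction above can also be driven from a script, for example to pre-download models before wiring them into a workflow. A minimal sketch using the standard library (it returns False rather than crashing if the `ollama` CLI is not installed):

```python
import shutil
import subprocess

def pull_model(name):
    """Download a model with 'ollama pull' (download only), as opposed
    to 'ollama run', which also opens an interactive chat session."""
    if shutil.which("ollama") is None:
        return False  # Ollama CLI is not installed on this machine
    result = subprocess.run(
        ["ollama", "pull", name],
        capture_output=True,  # keep the download progress output quiet
    )
    return result.returncode == 0
```

This is handy when setting up a new machine: pull everything once, then the ComfyUI nodes can use the models without any interactive step.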
Enjoyed the video.
Thank you ☺️ you created the node? Your name sounds familiar 😁
@@pixaroma no just a llm node user. I've commented before... 😅
Hi, I follow your channel and it's the best for AI. An inpainting tutorial in Comfy with both SDXL (maybe incorporating IP-Adapters and ControlNets) and Flux would be amazing! There are a lot of people on YouTube doing it in completely different ways.
I will see what I can do. You can do the same thing in many ways in ComfyUI, so people usually find what works for them and their system.
Thanx. Used it before, but now I'm using Searge LLM + Florence. Easier to use, no need to install additional service.
Yes, that is what I was using in episode 11; it's just that some people could not install Searge.
nice episode! (your BEST one for now!!!) TY sooo much for it ;)
Thanks for this tutorial
Thank you! So many gems and nuggets of wisdom in this video and on Discord.
Thank you so much for support, glad it helps 🙂
Great work again👏👍
Do you have knowledge on Lora Trainings for Comfy UI?
I don't have enough knowledge; I made some LoRAs for fun a few months ago. You can also train online, like on Civitai or Tensor.Art; for Flux locally there is FluxGym, search those terms.
suggest to use Groq API / flowise too =)
I assume that is not free; I mean, I could use ChatGPT also ;) I was just looking for a free local version.
I would be ever so grateful if you could create a workflow video that demonstrates how to change the production photo background, adjust lighting, and choose a suitable background image in ComfyUI. It would be incredibly helpful to see how to use MultiLatentComposite and Light Vector to effectively use a lantern image size and move the product around to follow the rule of thirds. If it's possible, could you also show how to apply these techniques to similar backgrounds? A video covering these topics would be truly valuable and I would really appreciate it.
@@longsyee Not sure If i can do that advance, but I need something like that for mockups, so if I am able to do it in the future I will do a video about it
@@pixaroma i have some findings on .. BiRefNet, ViTMatte, IC-Light, ResAdapter not sure if it helps.
Hi, I found your older video on how to convert sketches to AI art on your channel using A1111, but most of your new videos use ComfyUI and my friend also said ComfyUI is better. Is there an updated way to do it on ComfyUI or should I just stick with the method on the A1111 video?
I will try to do a video on that for comfyui, i plan to do all the things i did on a1111 and forge in comfyui
@@pixaroma Woah that was fast thanks
A word of warning in using LLMs to generate prompts and captions for training. Every time you query the LLM with an image it will give you a slightly different answer. It is not consistent.
Yes it changes. I use chatgpt for captions
Why does it say "Preview" for Windows? What do they mean by that? Is it like a demo version?
The term "Preview" for Ollama on Windows means it's an early version of the software that is still being tested and refined. It's not exactly a demo, but rather a pre-release that allows users to try out the software and provide feedback before the final stable version is released. This version is fully functional and includes features like GPU acceleration and access to the Ollama model library, but it may still have bugs or incomplete features as the team gathers user feedback and makes improvements.
@@pixaroma oh, I got you !!!
Downloading this app totally jacked my computer. I couldn't get into my Downloads folder at all; it was not responding. This was just on downloading the app, not even installing. I had to optimize my Downloads folder, so if this happens to you, I would go to this link for the fix: th-cam.com/video/8yIqbKvE9jI/w-d-xo.html Hope it's cool I posted this; I was pulling my hair out, so I thought I would share the fix.
Never had a problem, but I never download to the Downloads folder; I set Chrome to always ask where to put files. That could probably have happened with other downloads also.
@pixaroma not sure, but it's the first time it ever happened to me, and I couldn't get into the folder. It's an easy fix when you know what to do, though so wanted to share to save time for someone maybe 👍
Thanks ☺️
Even though it was installed, it still doesn't recognize the model in the model section of comfyui, why?
Hearing your AI voice pronounce it "Olah-mah" instead of "O-Llama" is so stinkin' cute! 😂 It's like when kids say "pasghetti" 😊❤
What strikes me more is how unfit English is to represent phonemes; as a native speaker (I suppose) you have to resort to adding characters to represent different vowel sounds and dashes to show where the stress goes. In Italian I would write those words as "olama" and "ollama", and no one could misinterpret the spelling or stress.
No way! All this time I thought it was a real voice! It can't be an AI voice?
☺️
is there any alternative opensource model like midjourny for Graphic designer, design inspiration?
With flux 1 dev model you can do pretty much like in mid journey
❤
awesome!
Searge LLM was short-lived 😂 Now, I am moving Ollama and models to an alternate partition (C: is a bit overcrowded). Thanks for this all.
I didn't try to move it; check github.com/ollama/ollama/issues/2551 and let me know if it works 😁
Thank you for your tutorial.