You can now support the channel and unlock exclusive perks by becoming a member:
th-cam.com/channels/mMbwA-s3GZDKVzGZ-kPwaQ.htmljoin
Join the conversation on Discord discord.gg/gggpkVgBf3 or in our Facebook group facebook.com/groups/pixaromacommunity.
Perfect tutorial. Thanks to this one I rejiggled some of my older workflows with excellent results, and congrats on the 10.2K subscribers.
thank you man
as always great tutorial
Excellent tutorial as always. Hope you stick around for a while as more models are on the way (I assume). :)
Great tutorial and very knowledgeable!
Amazing tutorial!
Thank you! Great video. Much appreciated.
hey, thanks for the great tutorial. For the blurry images you can set the sampler to euler and the scheduler to beta
Did it fix all the blurred images? For me it only works sometimes. Doing 30 steps also helps, and sometimes a FluxGuidance around 2
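If you export your workflow in ComfyUI's API format (a plain JSON file), the two settings mentioned above are just fields on the KSampler node. A minimal sketch of patching them programmatically, assuming a node id of "3" for the KSampler (your exported file will have its own ids and more nodes):

```python
import json

# Sketch: patch an API-format ComfyUI workflow so the KSampler uses
# euler + beta with 30 steps, as suggested in the thread above.
# The node id "3" and the starting values are assumptions, not a real export.
workflow = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "sampler_name": "dpmpp_2m",
            "scheduler": "normal",
            "steps": 20,
        },
    }
}

# Find every KSampler node and override its sampler settings.
for node in workflow.values():
    if node.get("class_type") == "KSampler":
        node["inputs"]["sampler_name"] = "euler"
        node["inputs"]["scheduler"] = "beta"
        node["inputs"]["steps"] = 30  # more steps also helped with blur

print(json.dumps(workflow, indent=2))
```

You can of course just change the same dropdowns in the ComfyUI interface; the JSON route is only useful if you batch-generate with the API.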
niiice! Ep010 already
Thank you. 🙏
Love your videos. What is the best inpainting on ComfyUI (not for Flux)? Are you going to make a video on it?
I am going to make a video about that too, since I need to test different models; same for upscaling
nice ! What about the modelsamplingflux ? You don't use max shift /base shift parameters ?
I like to keep the workflow simple, unless you need something specific that requires that node. As you saw, I compared the full dev with that complex workflow against the Q8 dev without it, and the results were pretty much the same
Is there any style CSV for Flux just like for SDXL? Looking forward to it
You can use it with Flux too; it's just that Flux doesn't know as many art styles as SDXL
I get the same error, still trying to find a solution. Great work bro
Let me know on discord what problem you have
Are you familiar with the Hyper-SD LoRAs for Flux that were recently released?
I tried to set it up but obviously don’t know enough because I just got noise
With a 4090 isn't there a local text-to-speech you could use that is nearly as good as what you've been paying for? I don't find it terribly natural anyways.
Thanks for all the tuts!
I saw it yesterday but didn't test it yet
There are; I tried some but didn't find one that sounds as good. I am sure in a year we will have something as good for free, but I didn't like what I've tested so far
Got here out of curiosity… could be speaking another language not on earth to me… but interested in learning.
whats the difference between the models flux1-dev-Q5_K_S.gguf flux1-dev-Q5_1.gguf flux1-dev-Q5_0.gguf like what does the k_s and 1 and 0 mean
I am not an expert on this, I use Q8 for example. The suffixes in names like Q5_K_S describe how the model was quantized:
- Q5: a 5-bit quantization level, balancing size and accuracy. Higher bit counts like Q8 keep more accuracy, while lower ones like Q4 trade accuracy for smaller files and less VRAM.
- K: the newer "k-quant" scheme, which splits the weights into blocks and quantizes each block with its own scale, usually giving better quality than the legacy formats at the same bit count.
- S: the size variant of a k-quant (S = small, M = medium, L = large), trading a little accuracy for a smaller file.
- _0 and _1: the older legacy formats. _1 stores an extra offset per block, so it is slightly more accurate; _0 is slightly smaller and faster.
For example, Q4_K_S is a 4-bit k-quant in its small variant: smaller and faster than Q5 or Q8, but slightly less accurate
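The practical effect of the Q-number is mostly file size and VRAM use. A rough back-of-the-envelope sketch: parameters times bits per weight. The 12B parameter count for Flux dev and the bits-per-weight figures below are approximations I'm assuming for illustration (real GGUF files also store per-block scales, so the effective bits per weight sit a bit above the nominal number):

```python
import re

def parse_gguf_quant(filename):
    """Extract the quantization tag (e.g. 'Q5_K_S') from a GGUF filename."""
    m = re.search(r"(Q\d+(?:_K)?(?:_[SML01])?)\.gguf$", filename)
    return m.group(1) if m else None

def approx_size_gb(num_params_billion, bits_per_weight):
    """Rough file-size estimate: parameters x bits per weight, in GB."""
    return num_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Assumed ballpark effective bits per weight for each quant level.
for name, bpw in [("flux1-dev-Q8_0.gguf", 8.5),
                  ("flux1-dev-Q5_K_S.gguf", 5.5),
                  ("flux1-dev-Q4_0.gguf", 4.5)]:
    print(f"{parse_gguf_quant(name)}: ~{approx_size_gb(12, bpw):.1f} GB")
```

This is why Q4 files fit on 6 to 8 GB cards where Q8 struggles; the actual download sizes on the model pages are the numbers to trust.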
What video card are you using to get those speeds? :)
RTX 4090. I do speed up the videos, but Flux takes about 15 seconds to generate
@@pixaroma and how much video memory? :) Fine, everyone speeds up their videos, that's normal, but I was blown away when I saw 1.23s/it :))) I haven't managed faster than 4.92s/it with gguf, but I only have a 3060 with 12gb :/
@@andreizdetovetchi 24gb vram and 128gb system ram, i9
I tried Flux GGUF last night and it took 10 minutes to render a 1024x1024 image compared to the 10 seconds it takes in SDXL. My RTX 3070 days are numbered lol
Try updating ComfyUI. I just tested the Q4 version on an RTX 2060 with 6GB VRAM: it took 30 seconds for Schnell Q4 and around 200 seconds for Dev Q4
I also tested this model, which is a mix of Dev and Schnell and only needs 4 steps just like Schnell; again it takes 30 seconds on the 6GB RTX 2060: civitai.com/models/657607?modelVersionId=745392. You put it in the unet folder; just extract it first since it comes in an archive. I got the Q4_0 version v2
Bad ComfyUI setup, bro. I have a 3060 8GB and the GGUF unet model is so fast, 2 min max for a 720x1280 image. Also update CUDA; if that doesn't help, just reinstall. GGUF is so fast I just can't believe it
Something is wrong with your Comfyui. Maybe update it.
@@sania3631 There are so many different versions of Flux; I was probably using the wrong one. Idk, I haven't been able to find anything saying how fast a 3070 can render a Flux image. I switched back to Forge and see that it can also run Flux. I'll try the NF4 version instead of GGUF; I hear it's faster.
Can you make a video on how to download CogVideoX-5B with ComfyUI? Btw your channel is very useful, keep up the good work
I saw people using it but the quality is not so great. I was hoping for a better release before we do video with ComfyUI, and there is still more to cover on the image side before we switch to video. Once something works OK I will do videos for it
@@pixaroma i understand, thank you
1st 🫡
AMAZING!!!!! NICE NICE!!!!
thank you for this valuable list of tutorials. What are the best settings and models for generating images with ComfyUI? I have a GTX 1060 6GB and 16GB RAM
Flux is a little much for your system. I got crashes with GGUF versions bigger than Q4 on an RTX 2060 6GB, and that's an RTX, not a GTX, and I have 64GB of system RAM. What should run fast for you are the older SD 1.5 models; SDXL models like Juggernaut X Hyper also work OK for me on that PC. For Flux, test first to see if it can handle it: even if it doesn't crash, it can take a long time to generate, especially the first time. On that PC I used the Schnell fp8 just like in episode 8 and it ran OK for me
I saw you were using ChatGPT to copy/paste text prompts?
The styles dropdown in episode 7/8 were amazing.
Recently I saw a couple of YouTube videos where people are using LLMs like Ollama or Sarge right within the ComfyUI workflow to transform a simple sentence into a descriptive text prompt.
Are you familiar with this, and would you plan to explore it in a future video?
@@SebAnt actually that is the subject of episode 11 :) I am still working on it and testing, but it should be ready next week