Click the link below to stay updated with the latest tutorials about AI 👇🏻:
www.youtube.com/@Jockerai?sub_confirmation=1
Oh man, your voice is so clear! I'm not a native English speaker and I understand everything!
Greetings from Mexico City!
That was very detailed and clean, so thank you so much.
@@legatoasdi3894 thank you 😉✨
None of the links to download the workflows or the VAE files seem to be working. Can you advise, please?
Never mind, the SSL error went away. Looks good now. Thanks for the tutorial.
@@robertwilson332 I checked it and it works. But I will change the links so that everyone can open them.
@@robertwilson332 you're welcome bro 😉
Great video, but I've always been curious where I can find a detailed list or video of the required or recommended VRAM for each of the GGUF models. The same for the CLIP models would be great to find.
you are making perfect videos for Flux
@@fahimabdulaziz4255 kind of you mate🤡❤️
You should use a fixed seed in order to compare two models' results.
🔥🔥🔥
@@AgustinCaniglia1992 😍🤡✨
For people with Radeon cards, should we also install ZLUDA?
Hello! Thanks for your great tutorial. I have a question about your workflow: how do you create a negative prompt with it? I see the CLIP Text Encode is connected to the FluxGuidance node, but I don't get how to also use a negative prompt.
@@ThierryPianoSolo you're welcome, bro. Negative prompts don't work properly in Flux at this time, but in the workflow you still have to connect an empty CLIP Text Encode node, otherwise you will get an error.
thanks for the content !
you're welcome✨💚
I also have a "borderline" 3060 12GB. Which version do you use, the low-VRAM or the normal ComfyUI?
I have the error "CLIPTextEncode: 'NoneType' object has no attribute 'device'". How do I fix this?
Your YouTube channel is growing.
@@fahimabdulaziz4255 it's my pleasure that I can feel connected to more of you✨
Hi, sorry to bother you, but I can't add the LoRAs I created with ostris on Replicate to ComfyUI. Are the files you created saved on Hugging Face or in your own folders? I'm so confused lol, I'm new to this.
@@thebuniverse6739 aren't they saved in your Hugging Face account?
@@Jockerai it's on my Hugging Face but I can't find the file. Does it end with .safetensors?
Never mind, I found it, thanks for your help, Joker! Also, can you do a tutorial on masking and inpainting? That would be cool for newbies xD
One more thing: how do you add two different LoRAs in one scene? When I use my LoRA (myself) and another character, they both end up looking like me. Does this have to do with inpainting or masking? xD
For those of you getting errors about the Flux LoRA, here is the directory to put it in: ComfyUI\models\xlabs\loras
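A minimal sketch of that fix in Python, assuming a default folder layout ("ComfyUI" stands for your own install root, and realism_lora.safetensors is just the example file name from the error):

```python
from pathlib import Path

# Assumed install root; point this at your own ComfyUI folder.
lora_dir = Path("ComfyUI") / "models" / "xlabs" / "loras"
lora_dir.mkdir(parents=True, exist_ok=True)

# After downloading, move the LoRA file in, e.g.:
# shutil.move("realism_lora.safetensors", lora_dir / "realism_lora.safetensors")

# The loader's dropdown lists whatever .safetensors files end up here:
print(sorted(p.name for p in lora_dir.glob("*.safetensors")))
```

Restart ComfyUI (or hit refresh) afterwards so the dropdown picks up the new file.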
I tried this and it still doesn't work for me. Any help?
When I run with the LoRA enabled (in both this and your last tutorial) I get an Out of Memory error. If I just bypass the LoRA, it works fine. Any tips on how to get the LoRA working?
what is your PC config?
@@Jockerai Ryzen 7800X3D - AMD RADEON 7700XT -- 64GB ram
@@BrunoOrsolon your system is good enough; I think the problem is the GPU, because AMD GPUs are not well supported by AI models. Try this: increase your virtual memory size in Windows (in the Performance Options section), or use other Flux models.
Is this model faster than the NF4 v2?
@@rennynolaya2457 yes it is. I think three or four times faster.
thank you so much
@@fahimabdulaziz4255 you're welcome bro 😉✨
Standard setup, but I get static, like a noised image. On upscale it says you have to reconnect the Ultimate Upscale node, and you cannot match the numbers from the original node. If I use the defaults I get: C:\Users\walte\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\_tensor.py", line 1443, in __torch_function__
ret = func(*args, **kwargs)
RuntimeError: start (24) + length (1) exceeds dimension size (24).
GTX 1060 6GB, but it's not working for me... any alternatives? :'(
@@noureddinegdn3850 it should work; download the Q4 version.
@@Jockerai my bad! I thought it was the NF4 model... regardless, it still didn't work haha. Any help please:
Prompt outputs failed validation
DualCLIPLoaderGGUF:
- Value not in list: clip_name1: 't5-v1_1-xxl-encoder-Q4_K_M.gguf' not in ['clip_l.safetensors', 't5-v1_1-xxl-encoder-Q5_K_M.gguf']
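That validation error just means the file name stored in the workflow isn't among the files ComfyUI actually sees in its models/clip folder. A small sketch of the check, assuming a default layout; the fix is to either re-select one of the listed files in the node or download the matching .gguf into this folder:

```python
from pathlib import Path

# Assumed install root; adjust to your own ComfyUI folder.
clip_dir = Path("ComfyUI") / "models" / "clip"
clip_dir.mkdir(parents=True, exist_ok=True)

wanted = "t5-v1_1-xxl-encoder-Q4_K_M.gguf"  # the name the workflow asks for
present = sorted(p.name for p in clip_dir.iterdir())
if wanted not in present:
    print(f"{wanted} is missing; the node can only offer: {present}")
```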
@@Jockerai also there is a big red box around the Flux LoRA loader.
Hello, does anyone know why, after updating ComfyUI to add Flux, the "CLIP Text Encode" nodes appear in red? I can search for them, but I can't add them, and it's the same with "CLIP Text Encode Flux".
Edit: the problem was the Everywhere node.
Delete it and you will fix the problem.
Thanks
@@naeemulhoque1777 you're welcome ✨😉
Can this setup work with an Nvidia 1080 graphics card with 8GB VRAM and 16GB RAM?
With the Q4 GGUF model it also crashes while working in ComfyUI.
@@vahid23344 with an 8 GB card it can work, but I haven't tested the 1080.
Does it work?
@@vahid23344 I guess it works, but with the Q4 model, because the 1080 is a bit old and can only handle small models.
@@RKFlago maybe with Q4
I read something about the 10- and 20-series cards not liking the lower floating-point formats these GGUFs are converted into. Where you download ComfyUI-GGUF there is a link with graphics-card recommendations for it; 30- and 40-series are what it recommends.
Where is the clip_l link? I can't find it anywhere. Please provide the link.
@@souravroy8834 huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors?download=true
@@Jockerai brother, I found the clip. Now it gives me a SamplerCustomAdvanced error. What should I do?
@@souravroy8834 if you have the Manager installed, click on it and click "Install Missing Custom Nodes". If you haven't, check my previous video for installing Flux in ComfyUI and install the ComfyUI Manager.
I have a new error: "`newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead." How do I fix it?
"Hello and thanks for your help! I managed to move forward using the command "python_embeded\python.exe -m pip install numpy==1.26.4".
I ran this command in D:\ComfyUI_windows_portable (if this helps anyone having the same issue)" - from "Compatibility issues with newbyteorder" #5451 on GitHub
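For context on the NumPy side of that error: 2.0 removed the ndarray.newbyteorder() method, but dtype.newbyteorder() still exists, so the replacement the message suggests works on both 1.x and 2.x (the downgrade above is just the quicker fix when the offending call is inside a dependency):

```python
import numpy as np

arr = np.array([1, 2, 3], dtype=">i4")  # big-endian int32
# Removed in NumPy 2.0:  arr.newbyteorder("<")
# Works on 1.x and 2.x:
swapped = arr.view(arr.dtype.newbyteorder("<"))
assert swapped.dtype == np.dtype("<i4")
```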
@@the_smad thanks for your help. Now I have another problem: "CLIPTextEncode: 'NoneType' object has no attribute 'device'"
@ReelEchoes you have probably selected the VAE model in the DualCLIPLoader. That should be the clip_l model.
@@the_smad thank you SO much!
My system: 16 GB RAM, an Intel Iris Xe GPU with 8 GB of dedicated memory, and a Core i5-11300H (11th gen) processor. Which model would you recommend for my system?
The workflow link doesn't work.
I fixed it; please download it again.
@@Jockerai Thanks works fine.
@@IBex__ you're welcome
Where do I find the clip_l file?
@@istreem-nehal in the description "download text encoders"
@@Jockerai only t5-v1_1-xxl-encoder-gguf is in "download text encoders", there's no clip_l.
@@DavidDoyleOutdoors I added it to the description. Please check it out again.
@@Jockerai thank you
Thanks a lot, dude!
@@motion_time thank you mate ✨
I liked the video anyway because this is good info, but the links you posted lead to a phishing site; Malwarebytes, Chrome, and Opera all blocked them. Please avoid using sites that are harmful to your viewers.
@@badimagerybyjohnromine you're welcome. I'm going to fix the links.
Where are the negative prompts, or how can I add them?
@@TheMusicLoversx unfortunately Flux models don't accept negative prompts. Even if you add them to your workflow, they don't work.
Some users have made a few complex workflows for negative prompts, but I tested them and 90% failed.
Got a 6900 XT and out of memory on Flux xD
Thanks a lot to you too!
@@vahid23344 much love 😉✨
Thank you, but I have faced this problem:
Prompt outputs failed validation
FluxLoraLoader:
- Value not in list: lora_name: 'realism_lora.safetensors' not in []
DualCLIPLoaderGGUF:
- Value not in list: clip_name1: 't5-v1_1-xxl-encoder-Q4_K_M.gguf' not in ['clip_l.safetensors', 't5-v1_1-xxl-encoder-Q3_K_L.gguf']
Do you know how to fix it? Thx.
I found a fix for this. You have to download the LoRA and put it in this folder: ComfyUI\models\xlabs\loras