Nerdy Rodent really makes my day, showing us AI in a really British way.
Been following your channel for almost 3 years, thank you for always delivering, Rodent!
Hi! Awesome introduction to the new Flux Tools!
Instead of two ConditioningSetTimestepRange nodes you can also combine two ConditioningSetAreaStrength nodes. The strengths of the two should always sum to 1. This setup gave me much more control over the results.
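For anyone wondering what that actually does, here's a rough Python sketch of the idea. The conditioning layout and the helper below are my own assumptions about ComfyUI internals, not something from the video:

```python
# Conceptual sketch of pairing two strength-weighted conditionings.
# ComfyUI conditionings are (as far as I can tell) lists of
# [embedding, options] pairs; ConditioningSetAreaStrength writes a
# "strength" option that the sampler reads for each entry.

def set_strength(conditioning, strength):
    out = []
    for embedding, options in conditioning:
        options = dict(options)           # copy, leave the original intact
        options["strength"] = strength
        out.append([embedding, options])
    return out

# Dummy stand-ins for the two conditionings you'd wire up in ComfyUI:
prompt_cond = [["<text-prompt embedding>", {}]]   # e.g. from CLIP Text Encode
style_cond = [["<redux image embedding>", {}]]    # e.g. from Apply Style Model

blend = 0.35  # how much the style conditioning contributes
combined = (set_strength(prompt_cond, 1.0 - blend)
            + set_strength(style_cond, blend))    # like Conditioning (Combine)

# Keeping the two strengths summing to 1 keeps the overall guidance at its
# usual scale, so turning one influence up always turns the other down.
print(combined)
```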
Nice!
Awesome tutorial! However, when I use the Flux Fill model, I am getting very noisy images... especially noticeable when zoomed in... any tips?
Huge thanks for bringing this content so quickly! God bless you!
Great! Using ControlNets with Redux makes it way better and more versatile.
Please explain - I get errors when I do everything as in the video: an error in "Load Style Model" for Apply Style Model.
@raphaild279 update ComfyUI
I see new Flux goodies on Reddit, I run to Nerdy Rodent's channel. Thank you.
Use image composite to composite the output with the original for the areas that aren't supposed to change.
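In case it helps anyone, a minimal sketch of that compositing step outside ComfyUI, using Pillow (file names are placeholders):

```python
# Paste the generated result back over the original so only the masked
# (inpainted) region actually changes; everything else stays pixel-identical.
from PIL import Image

original = Image.open("original.png").convert("RGB")
generated = Image.open("flux_fill_output.png").convert("RGB")
mask = Image.open("inpaint_mask.png").convert("L")  # white = area to change

# Image.composite takes pixels from the first image where the mask is
# white and from the second where it is black.
result = Image.composite(generated, original, mask)
result.save("composited.png")
```

Inside ComfyUI the ImageCompositeMasked node can do the same thing with the inpaint mask.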
It sounded like you said "Jeweled Clip Loader" and that's what I'm going to call it from now on. 1:44
Hopefully this means we're closer to a Fooocus Flux Edition?
Nothing for Forge UI yet?
Remember to update ComfyUI before using all of this (you probably said it in the video but I was in too much of a hurry to try ^^) or you will get some errors (or the LoRAs will do nothing).
Is there a way to do InstructPix2Pix with Flux? Where is the model?
Great. How do I get the Force/Set CLIP Device node?
If you have one GPU, you don't need that. It can break some nodes.
It's from the Extra Models node pack. Really handy for low-VRAM cards!
Hopefully someone can make it work on 12GB GPUs *fingers crossed*
Thank you for the Crop And Stitch node! You helped me a lot!
The Fill model tbh is a bit of a dud.
You get very similar results just using diff diff (differential diffusion) and the base Flux model; I assume that's why they don't compare to that.
Kinda silly not to compare to your base model doing inpainting - they're clearly not 'happy' that their Fill model is most likely only a tiny bit better than the base model.
first
Where did you get sigclip_vision_patch14_384.safetensors from?
There's a link to it on the ComfyUI_examples page in the "Redux" section. It's on the Comfy-Org Hugging Face page, in the sigclip_vision_384 repository.