Fantastic video as always! So much great info, I actually watched it twice so I could take in as much as possible! Thank you for sharing! Jason
Thank you so much! Makes my day when people get a lot of value out of the videos.
Flux is one of those model variants where an image can turn out worthless because of a typo in the prompt, or even the choice of sampler node and how it implements noise. Some sampler nodes only offer random noise, while a custom sampler with custom noise has a list of noise_types from fractal to hires-pyramid-near-exact, and a model can sometimes generate blanks with random noise but work fine with a specific noise_type. You can also use things like Perlin latent noise nodes, or an empty noisy latent, which differs from a plain empty latent as a starting point.
Some models are highly versatile, though, and will work with any sampler/scheduler/noise_type you throw at them and still give good results. To make matters worse, this can differ between versions of dependencies like PyTorch, where updating can simply cause a model to fail, or another version can halve the generation time or double it.
The way you prompt this model is also highly interesting, to say the least. I usually use prompt generators like autoprompt, OneButtonPrompt, or CreaPrompt plus character prompt generation, but since text encoding is sometimes the longest step in the flow, it can be problematic to wait 1:30 for the text encoding only to generate an image in about 10 seconds, which makes it hard to run random prompts to test its capabilities. Sometimes one wrong word in the prompt can really mess up the whole generation; for example, Prompt Builder (Inspire) has "accurate anatomy", and adding that to a prompt can give weird results even with SDXL models. Flux doesn't take negatives, so in a sampler node that requires that input you give it a blank, but that doubles the encoding time. For Flux I had good results with prompt > Text to Conditioning > BasicGuider without setting a CFG; sometimes you need to add guidance with something like FluxGuidance.
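A minimal sketch of the conditioning chain described above, written as a fragment of ComfyUI's API (JSON) workflow format. The node class names (CLIPTextEncode, FluxGuidance, BasicGuider) are standard ComfyUI nodes, but the node IDs, the upstream references, and the exact input names here are assumptions and may differ between ComfyUI versions:

```python
# Hypothetical fragment of a ComfyUI API-format workflow for Flux.
# Each key is a node ID; list values like ["6", 0] reference output 0
# of node "6". Upstream nodes "11" (CLIP) and "12" (model) are assumed.
flux_chain = {
    "6": {  # encode the positive prompt (Flux uses no negative prompt)
        "class_type": "CLIPTextEncode",
        "inputs": {"text": "a knight in full plate armor", "clip": ["11", 0]},
    },
    "7": {  # optional FluxGuidance injects a guidance value into the conditioning
        "class_type": "FluxGuidance",
        "inputs": {"conditioning": ["6", 0], "guidance": 3.5},
    },
    "8": {  # BasicGuider takes model + conditioning; note there is no cfg input
        "class_type": "BasicGuider",
        "inputs": {"model": ["12", 0], "conditioning": ["7", 0]},
    },
}
```

The guider output would then feed a custom sampler node together with a noise source, sampler, sigmas, and an empty latent, which is where the noise_type choice discussed above comes into play.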
The whole generation process is a lot more complex and sensitive than SDXL, where most models work with a dpm/dpmpp/dpmpp_sde/dpmpp_2m/dpmpp_sde_2m variant and the karras scheduler, but Flux can and will fail if you use the wrong loader to begin with.
Yes, this is very true and a great response! There's a lot of variation, and that's why figuring out the best options for a particular model is so important. Thank you again for this great feedback!
@@GrocksterRox Yes, but it was about time that more people mentioned how FLUX is a real mimosa, or let's call it the JENGA tower of AI image generation models: touch it at the wrong spot, and the quality crumbles or you won't be able to create consistent results. Fortunately I am not too much into photorealism, so FLUX in most cases is not my first choice... not to speak of it taking too long, even on my 4090.
@joechip4822 excellent perspective, and I completely understand about how it can become fragile based on particular settings. Well done!
PuLID and the installation experience FROM HELL 😄
Yup 100 percent agreed!
So much great info packed into one video! Thank you!!
I'm so glad it was helpful, every new tip and trick that can help someone is a bonus "good deeds" point :) Thanks for sharing with others!
Thank you for these flows and the AWESOME inventory! - so let me buy you a coffee ☕
You rock, thank you so much!
@@GrocksterRox I have just looked at your price list for individual training.. and I'm kinda embarrassed for this tip...
Please don't be, every bit helps, it's very much appreciated!
Very nice, thanks Grockster! and thanks to the Comfy developers for making that quick convert to input from widget functionality, that will save a lot of clicking!
Absolutely and though I can't take credit for the Comfy Devs, I'm glad they're actively making the environment easier!
Wow, those group nodes customizations are neat!
For sure, thanks so much for the comment and for sharing the video with others.
I have just started playing with face swap and found this! WOW - thanks!
Awesome, I'm so glad! I hope this helps you and feel free to share this education with others.
@@GrocksterRox I have just watched them ALL and YOU rock! I am new to ComfyUI and all the other tutorial videos are so complicated, but your talent for explaining stuff is awesome. Why? Because I am NOT a native English speaker (Polish), and your pronunciation is PERFECT and slow.
@bartosak It makes me so happy that these videos are helpful for you! Thank you so much for being a subscriber, and feel free to convince others to subscribe as well.
It helps a lot, thank you! ♥️
Enjoy! I love this new technique!
WoW..cool! Great
So glad you'll enjoy, have fun with it!
Thanks for the video about segments!! I'll try it out right away!!
It's really amazing, enjoy!
Thank You
Always welcome!
Thanks Grockster for the video, it helps me a lot. I followed your instructions to do a face swap using Detailer and a custom LoRA, but I failed to get the custom LoRA face to replace the original. I end up getting the face of the initial input image. Any tips on this would be greatly appreciated.
It's tough to diagnose with that description. Generally you want to make sure you have a high enough denoise to ensure the Detailer is given enough leeway to replace the face, but again hard to know unless I see it. Feel free to jump on the discord and we can chat. Good luck!
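To illustrate the denoise point: here is a hypothetical fragment in ComfyUI's API workflow format using the Impact Pack's FaceDetailer node. The node IDs, upstream references, and the specific denoise value are assumptions for illustration, not the author's exact settings:

```python
# Hypothetical ComfyUI API-format fragment: a FaceDetailer whose denoise
# is set high enough that the custom-LoRA face can overwrite the original.
# Node "25" is assumed to be the model with the custom LoRA applied.
detailer = {
    "30": {
        "class_type": "FaceDetailer",
        "inputs": {
            "image": ["20", 0],   # the image whose face should be swapped
            "model": ["25", 0],   # model output from the LoRA loader chain
            "denoise": 0.6,       # too low (e.g. 0.2) tends to keep the old face
            # other inputs (clip, vae, detector, prompt, ...) omitted here
        },
    },
}
```

The idea is simply that a very low denoise leaves the Detailer too little leeway to repaint the region, so the original face survives.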
i like the pro tip! thank you! ;)
Glad it was helpful!
Nice, thanks! Also, what are you using to animate the knight with your voice?
Thank you so much! I usually change between Live Portrait, Hedra and other video blends to try to get just the right balance of fun and realism :)
Thank you for all the work. I am struggling with the part where you draw fingers: I'm copying your steps exactly, but always getting this error: "Loop (22,21,15) - not submitting workflow". Anyone has a clue what's going on?
Definitely! Typically it's because you're looking from a preview bridge (which is being fed from your sampler) and you're looping that back into the same sampler. This causes an infinite loop that never ends, so Comfy has a protective control to error/prevent it. I would make sure you have a separate load image where you're doing the painting tweaks and then pipe that into a VAE Encode and then that into the sampler. Might be easier to explain online on discord if interested- discord.gg/SCK4MEZ3ph
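A sketch of the non-looping wiring described in that reply, again in ComfyUI's API workflow format: a separate LoadImage feeds a VAE Encode, whose latent goes into the sampler, instead of routing the sampler's preview output back into itself. Node IDs (picked to echo the "22, 21, 15" in the error), filenames, and sampler settings are illustrative assumptions:

```python
# Hypothetical ComfyUI API-format fragment showing the fix: the sampler
# ("15") consumes a latent built from a freshly loaded image, and no
# edge ever points back into the sampler, so there is no cycle.
fix = {
    "21": {  # load the hand-edited image separately, not via a PreviewBridge
        "class_type": "LoadImage",
        "inputs": {"image": "edited_fingers.png"},
    },
    "22": {  # encode the pixels into a latent for the sampler
        "class_type": "VAEEncode",
        "inputs": {"pixels": ["21", 0], "vae": ["10", 0]},
    },
    "15": {  # sampler takes the fresh latent; a moderate denoise keeps the edit
        "class_type": "KSampler",
        "inputs": {
            "latent_image": ["22", 0],
            "model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
            "seed": 0, "steps": 20, "cfg": 7.0, "denoise": 0.5,
            "sampler_name": "dpmpp_2m", "scheduler": "karras",
        },
    },
}
```

The looping error appears when node "15"'s own output is wired (via the preview bridge) back into one of its inputs; in this sketch no input anywhere references node "15", which is exactly what Comfy's cycle check requires.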
How do I swap a face using an image I already have?
If you're in Flux and you already have images of the face you want, you can train your own LoRA and then use the method in this video. Good luck!
What the heck.....why is the author using an avatar in FULL PLATE ARMOR... haha
All for fun and satire :)