UPDATE - The NEW Flux-Fill model for the Model-Conditioning Inpainting workflow (see the related chapter in the video) is available here: huggingface.co/black-forest-labs/FLUX.1-Fill-dev/tree/main (put it in your ComfyUI\models\diffusion_models folder and use the updated workflow from my Patreon). GGUF models are here: huggingface.co/YarvixPA/FLUX.1-Fill-dev-gguf/tree/main (put them in your ComfyUI\models\unet folder). You will most likely need to update your ComfyUI.
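As a quick sanity check, here is a minimal Python sketch that verifies the files landed in the folders mentioned above. The ComfyUI root and the exact file names are assumptions; adjust them to your installation.

```python
# Minimal placement check; COMFYUI_ROOT and file names are illustrative.
from pathlib import Path

COMFYUI_ROOT = Path(r"C:\ComfyUI")

expected = [
    # full Fill model -> models\diffusion_models
    COMFYUI_ROOT / "models" / "diffusion_models" / "flux1-fill-dev.safetensors",
    # a GGUF quantization -> models\unet
    COMFYUI_ROOT / "models" / "unet" / "flux1-fill-dev-Q4_K_S.gguf",
]

for path in expected:
    print(("found:   " if path.exists() else "MISSING: ") + str(path))
```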
--
UPDATE - the ControlNet-Inpaint-Beta-Model is available (use instead of Alpha): huggingface.co/alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta/tree/main
--
Which FLUX Inpainting workflow do you prefer?
For the new Flux-Fill-Model: Error occurred when executing UNETLoader:
Error(s) in loading state_dict for Flux:
size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]).
@anqipang6970 Please use the new workflow from my Patreon; using the UNETLoader is wrong. The Fill model's img_in layer expects 384 input channels (the packed latent plus the additional inpaint conditioning) instead of the usual 64, which is exactly the size mismatch in your error. Moreover, you most likely need to update your ComfyUI.
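For anyone hitting this, a small sketch for checking which Flux variant a downloaded checkpoint actually is, without loading the whole file. It assumes the safetensors package; the path is illustrative.

```python
# Inspect only the input layer's shape instead of loading the full model.
from safetensors import safe_open

path = r"C:\ComfyUI\models\diffusion_models\flux1-fill-dev.safetensors"
with safe_open(path, framework="pt") as f:
    shape = f.get_slice("img_in.weight").get_shape()

# [3072, 384] -> FLUX.1-Fill-dev (extra inpaint-conditioning channels)
# [3072, 64]  -> regular FLUX.1-dev
print("img_in.weight shape:", shape)
```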
Great vid! Are you planning to make one with the new Flux tools as well?
Thanks a lot!
I've already updated the description for this video and added a new workflow for Flux Fill. Regarding Depth and Canny, I'm not sure, as we already have several good solutions, including Union Pro for Flux, which I've covered in the Flux ControlNet video. I'm very keen on the new Redux model, but it doesn't seem to work the way I had hoped. Anyhow, that's currently the best candidate for a video about the Flux tools.
Great tutorial! Thank you so much for sharing this valuable knowledge, and it's truly admirable that you're offering these workflows for free. It really makes a difference!
Thank you for the recognition and the very motivating feedback!
Thank you for your great effort to share with the community for free! I greatly appreciate it. As a suggestion, though, please do not include music over the node-connecting parts; it really breaks one's attention. It's not a documentary or a music video, just a tutorial that people watch to focus and learn something. Thank you again, and keep up the good work!
Thank you for your honest feedback. I'll take this into account in my next videos.
Wow, fantastic job trying all these methods!!! Thank you very much and congratulations!!
Such motivating feedback - thanks a lot!
Great tutorial, thanks. How can we use inpainting with two LoRAs for different characters?
Thanks a lot. First inpaint the left character with its LoRA, then in a second step inpaint the right one with the other LoRA.
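Outside ComfyUI, the same two-pass idea can be sketched with diffusers' FluxFillPipeline. The LoRA file names, mask files, and prompts below are placeholders, not files from the video.

```python
# Two sequential inpainting passes, one LoRA each; all paths are illustrative.
import torch
from diffusers import FluxFillPipeline
from PIL import Image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = Image.open("scene.png")

# Pass 1: left character with the first LoRA and its mask.
pipe.load_lora_weights("character_a.safetensors")  # placeholder LoRA
image = pipe(prompt="character A, detailed face",
             image=image,
             mask_image=Image.open("mask_left.png"),
             height=image.height, width=image.width).images[0]
pipe.unload_lora_weights()

# Pass 2: right character with the second LoRA, run on the result of pass 1.
pipe.load_lora_weights("character_b.safetensors")  # placeholder LoRA
image = pipe(prompt="character B, detailed face",
             image=image,
             mask_image=Image.open("mask_right.png"),
             height=image.height, width=image.width).images[0]
image.save("both_characters.png")
```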
Thank you very much. I was thinking about comparing different inpainting techniques, so your video is just what I need. What do you think about cropping the inpainting part, upscaling it separately, inpainting, and then stitching it back? There is a Crop&Stitch node for that, or we can do it manually, but I'm not sure whether those would work with your ControlNet workflow.
Thanks a lot for your feedback.
Interesting, I didn't know these two nodes. It looks like by using them we can get something similar to 'masked only' in A1111.
I don't think you need ControlNet for this. I'm not sure regarding upscaling, but usually it's a good idea to do at least a 2x upscale after inpainting to blur the contours.
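For reference, a rough Pillow-only sketch of the crop, upscale, inpaint, and stitch sequence being discussed. The inpainting step itself is just a placeholder function standing in for whatever workflow you use.

```python
from PIL import Image

def run_inpaint(img: Image.Image) -> Image.Image:
    # Placeholder: run your actual inpainting pass on this crop.
    return img

def crop_upscale_stitch(image: Image.Image, mask: Image.Image,
                        scale: int = 2) -> Image.Image:
    box = mask.getbbox()                     # bounding box of the white mask area
    region = image.crop(box)
    size = (region.width * scale, region.height * scale)
    up = run_inpaint(region.resize(size, Image.LANCZOS))
    result = image.copy()
    result.paste(up.resize(region.size, Image.LANCZOS), box)
    return result
```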
Thank you for the nice video! I was wondering about your thoughts on applying this to background generation. Do you think the inpainting can be done with consideration of the non-masked region?
Thanks for your feedback! Yes, that's possible. In the 'grow mask with blur' node you have to toggle 'flip_input' to 'true'; then the non-masked region is newly generated from the prompt.
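Conceptually, 'flip_input' just inverts the mask before it is grown and blurred, so the background becomes the area to be regenerated. In plain Pillow that looks roughly like this (file names are illustrative):

```python
from PIL import Image, ImageFilter, ImageOps

mask = Image.open("mask.png").convert("L")
inverted = ImageOps.invert(mask)     # what 'flip_input' effectively does
# Soften the edge, analogous to the node's blur step.
blurred = inverted.filter(ImageFilter.GaussianBlur(radius=8))
blurred.save("background_mask.png")
```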
Excellent summary of all the options. You are one of the few who actually showed real results. You earned a sub.
Thank you very much for your feedback and the sub!
I can't make it change my images. IDK what I'm doing wrong; I'm using the exact workflow.
Thx for the video! Does differential diffusion work only with inpainting? Will it somehow impact an image when used with img2img?
Thanks for your feedback! I have only used differential diffusion for inpainting and only know it in that context.
Hi, as of recently it is possible to use AMD GPUs with PyTorch ROCm natively under Windows via WSL2. Could you do performance tests in ComfyUI comparing this native solution versus the ZLUDA translator?
I would like to, but AMD behaves like AMD again: only the 7000 series is supported.
@@NextTechandAI This method doesn't work for you? HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py
@@raven1439 That's not the problem. I tried it a while ago, the GPU call just got stuck. I then read about several cases that the 6800 is not (yet?) supported by the driver.
🙏
Text eludes me with inpainting in Flux.
What do you mean?
@@NextTechandAI I mean, I have spent over 2 days trying to get it to work and it will not. I have gone through various YT creator workflows and forget it. Ironically, I actually had XL almost do it, while the flux one next to it (from a creator) could not. Dev. I even tried your workflow to no avail.
@@generalawareness101 I still don't know what exactly didn't work for you, but in general you have to give the text enough space. Similar to finger inpainting, the new area to be inpainted needs to be large enough to actually accommodate 5 fingers.
@@NextTechandAI I gave it 1/4, 1/2, 3/4 of the images. I tried everything.