I'm back, sorry for the wait!
hi
Oh my golly, FINALLY a real teacher who actually EXPLAINS what is happening behind the scenes. Liked, subbed, and loved. I’m soooo tired of the millions of bs tuts out there that tell you nothing.
Thanks a ton!!!
I sincerely follow each of your videos; your artistic approach amazes me every time. We are so lucky to have you here. The way you open source your knowledge is amazing.
thank you for the kind words!
learned a lot! Thanks master.
Very good tutorial, thanks for explaining the specific nodes.
Welcome back! Congratulations on the new job.
thank you!
Hi! I really like your creative and meticulous workflow and your attitude towards licensing. Glad to see you back in action.
thank you!
Thank you always for sharing your insight and everyone is congratulating you for your new job so congratulations as well!
We missed you! Thanks for another beautifully informative tutorial, and congratulations on your new position! They're lucky to have you :)
thank you! I missed doing videos too
Congrats on the new job!
Thanks!
Really nice workflow and explanation, thanks :)
Congrats on your new job! I have been using Photoshop for 20 years, so I am looking to learn Flux also to expand my art techniques. Thank you for the tutorials!
thank you! While PS is great for ease of use, I think creating automated pipelines in Comfy is better for large volumes that always need the same logic applied
Really nice detailed overview and clearly explained, thanks :)
Look for the Inpaint crop and stitch nodes. They do the same thing as your advanced workflow, but much easier.
Exactly what I was looking for. It's like you read my mind.
"Load & Resize Image" from KJnodes does loading, resizing/scaling (with multiple). It can replace your complete Input-group 😊Thanks for another great video
Great work!
Congratulations on the new job..! Hope they appreciate your knowledge... thanks for the workflow, looks like it's time to finally download that fat union-CN model... my SSD is crying...
Thank you! As suggested by another comment, you could use the Alimama inpainting ControlNet for Flux, but it works differently and it's not as "catch all" as depth or other ControlNets in my testing.
Great stuff and a great explanation! I normally use the "prepare image for inpaint" node to crop and then the "overlay" node to stitch it back, but I noticed that it keeps the original image proportions for the bounding box, losing resolution. That doesn't seem to be the case here, so I will probably update my workflows with this =) Thanks!
IC-Light v2 is out. Can't wait for your next video.
dude, thanks for these videos! Really helped!
Do you have any idea how I could change the view outside a window? I would like to keep the window and everything around it the same - just change the view... any idea?
if you can create a mask in something like Photoshop, you can import the mask separately. As long as it lines up with its image, you can inpaint over a separately loaded mask instead of drawing one in the "Open in MaskEditor" window.
create a mask only inside the window, and after loading the image and the mask you'd need to adjust the ControlNets' strength to taste, and inpaint only inside the window.
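conceptually, all the "separate mask" route needs is a single-channel mask that matches the image resolution exactly. A minimal sketch of that idea outside of ComfyUI (assuming a black-and-white PNG exported from Photoshop; file names are just placeholders):

```python
# Minimal sketch: a separately exported mask is just a single-channel float map,
# same H x W as the image, with 1.0 where inpainting should happen.
import numpy as np
from PIL import Image

image = Image.open("window_photo.png").convert("RGB")   # hypothetical file names
mask_img = Image.open("window_mask.png").convert("L")   # white = area to inpaint

# the mask only works if it lines up with the image pixel-for-pixel
assert image.size == mask_img.size, "mask must match the image resolution exactly"

mask = np.asarray(mask_img).astype(np.float32) / 255.0  # ComfyUI MASK-style range, 0..1
print(mask.shape, mask.min(), mask.max())
```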
@@risunobushi_ai hey, thanks for the quick reply! The thing is that I'm programming a web app which needs to do all this automatically. I was hoping for there to be a ready to use model on replicate but it looks like I'll need to create a custom model for this :D
Thanks again, I'm returning to your video. I have a question, please: what setting do I change in the lower groups (Flux and SDXL) so that the generated preview/save image is identical in size to the one I loaded and masked in the Input group? Thank you!!!!!!
I needed this
great project
Hey, nice video. Have you checked whether the aspect ratio of the extracted area influences quality? And have you tested the workflow with 3.5?
I use the Apply Advanced Controlnet node in ComfyUI-Advanced-ControlNet by Kosinkadink, as that has an optional mask to control which regions are influenced by the depth map conditioning. In your example of inpainting large flowers over small ones, I would provide the inverted inpainting mask as an input mask to the Apply Advanced Controlnet node. The effect is that the masked conditioning helps the inference understand the context around the target inpaint area, but ignores the existing content inside the area.
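for clarity, the inversion described here is just 1 - mask on a 0..1 float mask; in the graph it's an Invert Mask node (or an inverted-mask output) wired into the Apply Advanced ControlNet mask input. A rough tensor-level sketch of the idea (the size and region below are made up for illustration):

```python
# Rough sketch of the masked conditioning idea, using ComfyUI-style MASK tensors
# (float, 0..1). Shapes and the masked region are placeholders for illustration.
import torch

inpaint_mask = torch.zeros(1, 512, 512)       # 1.0 where the new flowers get painted
inpaint_mask[:, 200:380, 160:420] = 1.0

controlnet_mask = 1.0 - inpaint_mask          # depth conditioning applies only *outside* the inpaint area
# controlnet_mask is what would feed the optional mask input of Apply Advanced ControlNet
```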
Very informative as usual!
I'm wondering, why not directly use inpainting ControlNets like the one from Alimama?
I debated showing them as well, but ultimately I decided against it because:
- they’re not as straightforward to understand in terms of how they work (with depth it’s much easier to understand from the preprocessed image)
- they’re not always as good as a custom ControlNet setup (for example, I had mixed results using them with face Loras / garment Loras combos)
- they’re not always available for all models, or they might not be as quick in being released, so it wouldn’t have been a “catch-all”, easy solution
But yeah, they’re a valid alternative depending on the use case
Another great workflow by the looks of it! Although I get a KSampler error - 'mat1 and mat2 shapes cannot be multiplied (1x768 and 2816x1280)'? Is this due to the original picture size or something being wrong with the Math 1 and 2 nodes?
this is the error you get when you're trying to use a ControlNet with a different model than it was designed for - so an SDXL ControlNet with a FLUX model, for example
thanks for the great info
I have seen some inpaint controlnets, like the alimama inpaint alpha (now beta) for flux. Any idea on how they should be implemented? Is it an alternative to the inpaintmodelconditioning node?
hi! Alimama's inpainting ControlNet, AFAIK, doesn't need a preprocessor, and in my testing the higher the strength is, the more it forces the inpainting over the original image. But then again, I'm not an expert on inpaint ControlNets, mainly because I find them too specific to what they were trained for, and I'd rather use fewer tools that are better suited to general use
KSampler freezes and reports an error.... any known solution for that?
Hi Andrea, Comfy is really great for flexibility between LoRAs and models, but for inpainting I prefer to use Invoke AI (local UI). Have you tried it? Thanks for your work 🙌
I don’t use Invoke in my stack, mostly because the clients I work for like to implement comfy rather than anything else, or straight up use the API versions of the json files
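as a rough idea of what "using the API versions of the json files" looks like in practice: ComfyUI exposes a /prompt endpoint that accepts a workflow exported with "Save (API Format)". A minimal sketch, assuming a local instance on the default port (node IDs and file names are placeholders):

```python
# Minimal sketch: queue an API-format workflow export on a local ComfyUI instance.
# Assumes ComfyUI runs on the default 127.0.0.1:8188; node id "12" is a placeholder.
import json
import requests

with open("inpaint_workflow_api.json") as f:
    workflow = json.load(f)

# inputs can be patched programmatically before queueing, e.g.:
# workflow["12"]["inputs"]["image"] = "my_new_input.png"

resp = requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow})
resp.raise_for_status()
prompt_id = resp.json()["prompt_id"]   # poll /history/<prompt_id> for the finished outputs
print(prompt_id)
```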
Hey, I like the idea, but I've got a problem with it: only 1 out of 3 seeds gives me something I asked for, in both SDXL and Flux. I don't know how this is a thing, maybe the models? Flux gives me really random results. Also, I was trying to implement the new daemon detailer node with a custom advanced sampler, which also didn't really inpaint as wanted. Is there a way to implement that sampler as an extra node in the standard KSampler used in your workflow?
Did you test it before using detailer daemon, or did you straight up use it alongside it? I haven’t tested detailer daemon yet, and AFAIK it works by using model shifts, and that’s a much more invasive approach than usual - so I wouldn’t trust it to work properly with this kind of pipeline straight out of the box
@@risunobushi_ai I did run the workflow as is, with Flux dev and an inpaint model on the SDXL side. I wanted to inpaint red dots on a person's cap - I don't know if that's a difficult task, but both sides do whatever they want with the instruction, black logos or nothing at all, it's kinda weird. OmniGen was somewhat able to achieve it, but after some tries it seems to me that in your workflow the sampler just doesn't care about the text. Maybe it's just me though... so it's not a daemon detailer problem, it seems
can we add a LoRA to speed up the process? A turbo LoRA to make it 8 steps? Where should it be placed - before the Differential Diffusion node, or after?
yes you can, and usually you can apply it wherever, before or after Differential Diffusion. The only times I've had issues with the placement of Differential Diffusion were with specific versions of Comfy while using IPAdapter Advanced, in which case Differential Diffusion should go either before or after the IPAdapter, I don't remember which
I'm a bit lost about inpainting itself - do you just paint any area on the image with a solid color like black and then open it in Comfy?
hi! in order to inpaint, you can either:
- input an image and open it with the mask editor (right click on the image), then draw your mask, like in this video, or
- input an image and input a custom mask (in this case you'd need to rewire the mask pipeline to account for that)
Yeah!
Question: First, thank you for your video and your explanations.
In Photoshop, if I have an extra arm or something else that shouldn't be there: I select it and click on generate without doing anything else.
In Flux ComfyUI, I am confused. I am a beginner and I would have liked to be able to select the part to delete like in PS, but I am not sure I understood from your video whether that is possible (I have trouble understanding, so it does not come from you ^^; ).
Thanks again for your work, it helps me a lot.
Hi! In your specific case, you’d want to use a very low ControlNet strength, because you don’t want to follow the underlying picture too much - otherwise, if you did the opposite, you would always get something following the depth of the extra arm.
It’s possible, it just takes a bit of time adjusting to it!
hi, please tell me how I can make the VAE visible. I downloaded the .safetensors file and placed it in the models/vae folder, but the node still doesn't see it
and can I invert the mask and replace the background in a full-length portrait shot?
hi! did you refresh comfy after placing the models?
you can invert masks by using an Invert Mask node, or by using the "inverted mask" output of Grow Mask With Blur
Hi, sorry but I'm super new at this. I'm getting missing node errors:
---
Missing Node Types
When loading the graph, the following node types were not found
UnetLoaderGGUF
GetImageSize+
DepthAnythingV2Preprocessor
SimpleMath+
ImageResize+
GrowMaskWithBlur
---
Do I have to install some extensions to get these nodes to work?
hi! you need to go into the Manager (if you don't have it installed, get it from here: github.com/ltdrdata/ComfyUI-Manager ) and install the missing custom nodes. Once that's done, you should install any model you're missing - for example, the GGUF node will be missing a quantized version of Flux dev, found here: huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q4_0.gguf
usually, if you load a workflow, look up the missing models on Google, and check their docs, you should be able to find them and place them where they belong
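for the GGUF checkpoint specifically, here's a minimal download sketch with huggingface_hub (assuming the GGUF unet files live in ComfyUI/models/unet - adjust the path to your install):

```python
# Minimal sketch: fetch the Q4_0 quant of FLUX.1-dev linked above and place it
# where the GGUF unet loader is assumed to look (ComfyUI/models/unet).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="city96/FLUX.1-dev-gguf",
    filename="flux1-dev-Q4_0.gguf",
    local_dir="ComfyUI/models/unet",   # adjust to your ComfyUI install path
)
print("saved to", path)
```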
I found the extension needed for each node type:
UnetLoaderGGUF - ComfyUI-GGUF
GetImageSize+, ImageResize+ - Image Resize for ComfyUI
DepthAnythingV2Preprocessor - ComfyUI's ControlNet Auxiliary Preprocessors
SimpleMath+ - SimpleMath
GrowMaskWithBlur - ComfyUI-KJNodes
Hello, I'm encountering the following error right at the beginning:
Prompt outputs failed validation
SimpleMath+:
- Return type mismatch between linked nodes: a, INT != INT,FLOAT
SimpleMath+:
- Return type mismatch between linked nodes: a, INT != INT,FLOAT
Can you give me a tip on how to fix this?
I solved it by simply using the Utils Math Expression node instead.
👍👍👍
Do text. I don't mean on a sign - I mean: Image 1 plus the text "Hello", and out comes Image 1 with the "Hello" text that FLUX created overlaid.
but it's not a full-blown inpaint, it's just replacing one thing with another, it's a very simple thing....
Wdym? Can you please explain, I am a noob, sorry
Sorry, one more noobie question: I've downloaded Depth anything v2, but it keeps giving me this error even though I have a file in that folder: [Errno 2] No such file or directory: 'D:\\ComfyUI_windows_portable_nvidia\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\depth-anything\\Depth-Anything-V2-Large\\.cache\\huggingface\\download\\depth_anything_v2_vitl.pth.a7ea19fa0ed99244e67b624c72b8580b7e9553043245905be58796a608eb9345.incomplete'
it looks like the node can't properly download the Depth Anything V2 model into its folder. Try selecting a different Depth Anything model in the dropdown menu, like the S version, or change the preprocessor to another depth estimator model (like MiDaS, Marigold, Zoe, etc.)
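if the automatic download keeps failing, a manual fetch into the folder from the error message is another option - a sketch, assuming the preprocessor looks under comfyui_controlnet_aux/ckpts/depth-anything/Depth-Anything-V2-Large as the error path suggests:

```python
# Sketch: manually download the Depth Anything V2 Large checkpoint the node
# failed to fetch, into the ckpts folder shown in the error message (assumed path).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="depth-anything/Depth-Anything-V2-Large",
    filename="depth_anything_v2_vitl.pth",
    local_dir="ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts/depth-anything/Depth-Anything-V2-Large",
)
print("saved to", path)
```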