Easy Inpainting for ANY model (SDXL, Flux, etc)

  • Published Nov 26, 2024

Comments • 62

  • @risunobushi_ai
    @risunobushi_ai  months ago +14

    I'm back, sorry for the wait!

  • @1lllllllll1
    @1lllllllll1 11 days ago +2

    Oh my golly, FINALLY a real teacher who actually EXPLAINS what is happening behind the scenes. Liked, subbed, and loved. I’m soooo tired of the millions of bs tuts out there that tell you nothing.
    Thanks a ton!!!

  • @abaj006
    @abaj006 13 days ago

    Very good tutorial, thanks for explaining the specific nodes.

  • @baheth3elmy16
    @baheth3elmy16 months ago +3

    Welcome back! Congratulations on the new job.

  • @Mranshumansinghr
    @Mranshumansinghr 29 days ago

    Exactly what I was looking for. It's like you read my mind.

  • @bregsma
    @bregsma 29 days ago

    Thank you, as always, for sharing your insight. Everyone is congratulating you on your new job, so congratulations as well!

  • @zerobase9858
    @zerobase9858 29 days ago +1

    Hi! I really like your creative and meticulous workflow and your attitude towards licensing. Glad to see you back in action.

  • @JohanAlfort
    @JohanAlfort 21 days ago

    Really nice workflow and explanation, thanks :)

  • @runebinder
    @runebinder 28 days ago

    Really nice detailed overview and clearly explained, thanks :)

  • @antiquesandroses
    @antiquesandroses 29 days ago +1

    Congrats on your new job! I have been using Photoshop for 20 years, so I am looking to learn Flux also to expand my art techniques. Thank you for the tutorials!

    • @risunobushi_ai
      @risunobushi_ai  29 days ago

      thank you! While PS is great for ease of use, I think that creating automated pipelines in Comfy is better over large volumes that always need the same logic applied

  • @JoelB71
    @JoelB71 months ago +1

    We missed you! Thanks for another beautifully informative tutorial, and congratulations on your new position! They're lucky to have you :)

    • @risunobushi_ai
      @risunobushi_ai  29 days ago

      thank you! I missed doing videos too

  • @prodmas
    @prodmas months ago +2

    Look for the Inpaint crop and stitch nodes. They do the same thing as your advanced workflow, but much easier.

  • @defidigest9
    @defidigest9 26 days ago

    I needed this

  • @DanDanTheAiMan
    @DanDanTheAiMan months ago +2

    Congrats on the new job!

  • @Mranshumansinghr
    @Mranshumansinghr 28 days ago

    IC light Ver 2 is out. Can not wait for your next video.

  • @kallamamran
    @kallamamran 28 days ago

    "Load & Resize Image" from KJNodes does loading and resizing/scaling (to a multiple). It can replace your complete Input group 😊 Thanks for another great video
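
(The "multiple" option matters because SD-family pipelines typically want image dimensions divisible by 8: the VAE downscales by a factor of 8 into latent space. A minimal sketch of that rounding; `round_to_multiple` is a made-up helper name, not a node:)

```python
def round_to_multiple(value: int, multiple: int = 8) -> int:
    """Round a dimension to the nearest multiple of `multiple`.
    SD/SDXL/Flux pipelines typically want sizes divisible by 8,
    since the VAE downscales images 8x into latent space."""
    return max(multiple, round(value / multiple) * multiple)

print(round_to_multiple(1021))  # 1024
print(round_to_multiple(767))   # 768
```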

  • @Zampano2
    @Zampano2 months ago +1

    Congratulations on the new job..! Hope they appreciate your knowledge... thanks for the workflow, looks like it's time to finally download that fat union-CN model... my SSD is crying...

    • @risunobushi_ai
      @risunobushi_ai  months ago +1

      Thank you! As suggested by another comment, you could use the Alimama inpainting ControlNet for Flux, but it works differently, and it's not as "catch-all" as depth or other ControlNets in my testing.

  • @Neotrixstdr
    @Neotrixstdr 29 days ago +1

    Great work!

  • @ayakakamisato-ls8nu
    @ayakakamisato-ls8nu 28 days ago

    great project

  • @ValorantNexus
    @ValorantNexus 29 days ago

    thanks for the great info

  • @serasmartagne
    @serasmartagne 29 days ago

    I use the Apply Advanced Controlnet node in ComfyUI-Advanced-ControlNet by Kosinkadink, as that has an optional mask to control which regions are influenced by the depth map conditioning. In your example of inpainting large flowers over small ones, I would provide the inverted inpainting mask as an input mask to the Apply Advanced Controlnet node. The effect is that the masked conditioning helps the inference understand the context around the target inpaint area, but ignores the existing content inside the area.
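
As a rough illustration of what that optional mask input does (not the node's actual implementation; `mask_conditioning` is a made-up name): the inverted inpaint mask zeroes the depth conditioning inside the area being repainted, so only the surrounding context steers the ControlNet.

```python
import numpy as np

def mask_conditioning(depth_map: np.ndarray, inpaint_mask: np.ndarray) -> np.ndarray:
    """Gate the depth conditioning with the inverse of the inpaint mask:
    inside the mask (the area being repainted) the conditioning is zeroed,
    outside it the depth context is kept."""
    return depth_map * (1.0 - inpaint_mask)

depth = np.full((4, 4), 0.5)   # stand-in for a depth map
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0           # area to inpaint
cond = mask_conditioning(depth, mask)
print(cond[0, 0], cond[2, 2])  # 0.5 outside the mask, 0.0 inside
```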

  • @baheth3elmy16
    @baheth3elmy16 1 day ago

    Thanks again; I'm returning to your video with a question. What setting do I change in the lower groups (Flux and SDXL) so that the generated preview/save image is identical in size to the one I loaded and masked in the Input group? Thank you!

  • @ArnaudSteinmetz
    @ArnaudSteinmetz months ago +1

    Very informative as usual!
    I'm wondering: why not directly use inpainting ControlNets like the one from Alimama?

    • @risunobushi_ai
      @risunobushi_ai  months ago +1

      I debated showing them as well, but ultimately I decided against it because:
      - they’re not as straightforward to understand in terms of how they work (with depth it’s much easier to understand from the preprocessed image)
      - they’re not always as good as a custom ControlNet setup (for example, I had mixed results using them with face Loras / garment Loras combos)
      - they’re not always available for all models, or they might not be as quick in being released, so it wouldn’t have been a “catch-all”, easy solution
      But yeah, they're a valid alternative depending on the use case

  • @d4veejones53
    @d4veejones53 28 days ago

    Another great workflow by the looks of it! Although I get a KSampler error: 'mat1 and mat2 shapes cannot be multiplied (1x768 and 2816x1280)'. Is this due to the original picture size, or is something wrong with the mat1 and mat2 mentioned in the error?

    • @risunobushi_ai
      @risunobushi_ai  28 days ago

      this is the error you get when you're trying to use a ControlNet with a different model than it was designed for - an SDXL ControlNet with a FLUX model, for example
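
The error itself is plain linear algebra: a matrix product needs the inner dimensions to agree, and tensors sized for one model family pushed through another family's weights won't line up. A quick NumPy demonstration (the shapes are taken from the error message; what they correspond to inside the model is an assumption):

```python
import numpy as np

# (1 x 768) @ (2816 x 1280) fails because the inner
# dimensions (768 vs 2816) must be equal.
a = np.zeros((1, 768))      # e.g. an embedding sized for one model family
b = np.zeros((2816, 1280))  # e.g. a weight expecting another family's size

try:
    a @ b
except ValueError as e:
    print("incompatible:", e)

# When the inner dimensions agree, the product works:
c = np.zeros((768, 1280))
print((a @ c).shape)  # (1, 1280)
```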

  • @NinoLouLeChenadec
    @NinoLouLeChenadec 29 days ago

    Hi Andrea, Comfy is really great for flexibility between LoRAs and models, but for inpainting I prefer to use InvokeAI (local UI). Have you tried it? Thanks for your work 🙌

    • @risunobushi_ai
      @risunobushi_ai  26 days ago +1

      I don’t use Invoke in my stack, mostly because the clients I work for like to implement comfy rather than anything else, or straight up use the API versions of the json files

  • @DarioToledo
    @DarioToledo 29 days ago +1

    I have seen some inpaint ControlNets, like the Alimama inpaint alpha (now beta) for Flux. Any idea how they should be implemented? Are they an alternative to the InpaintModelConditioning node?

    • @risunobushi_ai
      @risunobushi_ai  29 days ago

      hi! Alimama's inpainting ControlNet, AFAIK, doesn't need a preprocessor, and in my testing, the higher the strength, the more it forces the inpainting over the original image. But then again, I'm not an expert on inpaint ControlNets, mainly because I find them too specific to what they were trained for, and I'd rather use fewer tools that are more suited to general use

  • @erikdias9604
    @erikdias9604 months ago

    Question: first, thank you for your video and your explanations.
    In Photoshop, if I have an extra arm or something else, I just select it and click generate without doing anything else.
    In Flux ComfyUI I am confused. I am a beginner, and I would have liked to be able to select the part to delete like in PS, but I am not sure I understood from your video whether that is possible (I have problems understanding, so it does not come from you ^^; ).
    Thanks again for your work; it helps me a lot.

    • @risunobushi_ai
      @risunobushi_ai  months ago

      Hi! In your specific case, you’d want to use a very low ControlNet strength, because you don’t want to follow the underlying picture too much - otherwise, if you did the opposite, you would always get something following the depth of the extra arm.
      It’s possible, it just takes a bit of time adjusting to it!

  • @salomahal7287
    @salomahal7287 27 days ago

    Hey, I like the idea, but I've got a problem with it: only 1 out of 3 seeds gives me something I asked for, in both SDXL and Flux. I don't know how this is a thing - maybe the models? Flux gives me really random results. I was also trying to implement the new Daemon Detailer node with a custom advanced sampler, which also didn't really inpaint as wanted. Is there a way to implement that sampler as an extra node in the standard KSampler used in your workflow?

    • @risunobushi_ai
      @risunobushi_ai  26 days ago

      Did you test it before using Detailer Daemon, or did you straight up use it alongside? I haven't tested Detailer Daemon yet, and AFAIK it works by using model shifts, and that's a much more invasive approach than usual - so I wouldn't trust it to work properly with this kind of pipeline straight out of the box

    • @salomahal7287
      @salomahal7287 24 days ago

      @@risunobushi_ai I ran the workflow as-is with Flux Dev and an inpaint model on the SDXL side. I wanted to inpaint red points on the cap of a person. I don't know if that's a difficult task, but both sides do whatever they want with the instruction - black logos or nothing at all. It's kind of weird. OmniGen was somewhat able to achieve it, but after some tries it seems to me that in your workflow the sampler just doesn't care about the text. Maybe it's just me though... so it's not a Daemon Detailer problem, it seems

  • @antronero5970
    @antronero5970 months ago +1

    Yeah!

  • @casperd2100
    @casperd2100 16 days ago

    Hi, sorry but I'm super new at this. I'm getting missing node errors:
    ---
    Missing Node Types
    When loading the graph, the following node types were not found
    UnetLoaderGGUF
    GetImageSize+
    DepthAnythingV2Preprocessor
    SimpleMath+
    ImageResize+
    GrowMaskWithBlur
    ---
    Do I have to install some extensions to get these nodes to work?

    • @risunobushi_ai
      @risunobushi_ai  16 days ago

      hi! you need to go into the manager (if you don't have it installed, get it from here: github.com/ltdrdata/ComfyUI-Manager ) and install the missing custom nodes. once that's done, you should install any model that you're missing, so for example, in the GGUF node you'll be missing a quantized version of flux dev, found here: huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q4_0.gguf
      usually, if you load a workflow, look up the missing models on Google, and look at their docs, you should be able to find them and place them where they belong

    • @casperd2100
      @casperd2100 16 days ago

      I found the extension needed for each node type:
      UnetLoaderGGUF - ComfyUI-GGUF
      GetImageSize+, ImageResize+ - Image Resize for ComfyUI
      DepthAnythingV2Preprocessor - ComfyUI's ControlNet Auxiliary Preprocessors
      SimpleMath+ - SimpleMath
      GrowMaskWithBlur - ComfyUI-KJNodes

  • @oonefilms
    @oonefilms 29 days ago

    I'm a bit lost about inpainting itself - do you just paint any area on the image with a solid color like black and then open it in Comfy?

    • @risunobushi_ai
      @risunobushi_ai  29 days ago +2

      hi! in order to inpaint, you can either:
      - input an image, and open it with the mask editor (right click on the image), then draw your mask, like in this video or
      - input an image, and input a custom mask (in this case you'd need to rewire the mask pipeline to account for that)
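
Conceptually, the stitched result is a mask-weighted blend of generated content and the original image. Real samplers do this in latent space (and Differential Diffusion blends per denoising step), so this pixel-space sketch with a made-up `composite_inpaint` helper is only an illustration of the idea:

```python
import numpy as np

def composite_inpaint(original: np.ndarray, generated: np.ndarray,
                      mask: np.ndarray) -> np.ndarray:
    """Blend a generated image into the original using a 0-1 mask:
    mask == 1 marks the inpainted region; elsewhere the original is kept."""
    mask = mask[..., None]  # broadcast the HxW mask over RGB channels
    return mask * generated + (1.0 - mask) * original

orig = np.zeros((4, 4, 3))       # stand-in original image (black)
gen = np.ones((4, 4, 3))         # stand-in generated image (white)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0             # region to inpaint
out = composite_inpaint(orig, gen, mask)
print(out[2, 2, 0], out[0, 0, 0])  # 1.0 inside the mask, 0.0 outside
```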

  • @FEILIU-m6c
    @FEILIU-m6c 24 days ago

    👍👍👍

  • @titanoplastik
    @titanoplastik 7 days ago

    Hello, I'm encountering the following error right at the beginning:
    Prompt outputs failed validation
    SimpleMath+:
    - Return type mismatch between linked nodes: a, INT != INT,FLOAT
    SimpleMath+:
    - Return type mismatch between linked nodes: a, INT != INT,FLOAT
    Can you give me a tip on how to fix this?

    • @titanoplastik
      @titanoplastik 7 days ago

      I solved it by simply using the Utils Math Expression node instead.

  • @panonesia
    @panonesia 29 days ago

    can we add a LoRA to speed up the process? a turbo LoRA to make it 8 steps? where do we place it - before the Differential Diffusion node or after?

    • @risunobushi_ai
      @risunobushi_ai  29 days ago

      yes you can, and usually you can apply it wherever, before or after Differential Diffusion. the only times I've had issues with the placement of Differential Diffusion were with specific versions of Comfy while using IPAdapter Advanced, in which case Differential should be either before or after the IPAdapter - I don't remember which

  • @4etam
    @4etam 19 days ago

    hi, please tell me how I can make the VAE visible. I downloaded the .safetensors file and placed it in the models/vae folder, but the node still doesn't see it

    • @4etam
      @4etam 19 days ago

      and can I invert the mask and replace the background in a full-length portrait shot?

    • @risunobushi_ai
      @risunobushi_ai  14 days ago

      hi! did you refresh Comfy after placing the models?
      you can invert masks by using an Invert Mask node, or by using the Grow Mask With Blur "inverted mask" output
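
Mask inversion itself is trivial: for a 0-1 mask it is just `1 - mask`, which is essentially all an invert-mask node has to do. A tiny sketch (`invert_mask` is a made-up name, not the node's code):

```python
import numpy as np

def invert_mask(mask: np.ndarray) -> np.ndarray:
    """Flip a 0-1 mask so masked and unmasked regions swap roles."""
    return 1.0 - mask

mask = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
print(invert_mask(mask))  # masked and unmasked cells swap: [[1,0],[0,1]]
```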

  • @generalawareness101
    @generalawareness101 3 days ago

    Do text. I don't mean on a sign; I mean: Image 1, text "Hello", and out comes Image 1 with the "Hello" text that FLUX created overlaid.

  • @Art13eck
    @Art13eck 19 days ago

    but it's not a full-blown inpaint; it's just replacing one thing with another - a very simple thing...

  • @oonefilms
    @oonefilms 28 days ago

    Sorry, one more noobie question: I've downloaded Depth anything v2, but it keeps giving me this error even though I have a file in that folder: [Errno 2] No such file or directory: 'D:\\ComfyUI_windows_portable_nvidia\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\depth-anything\\Depth-Anything-V2-Large\\.cache\\huggingface\\download\\depth_anything_v2_vitl.pth.a7ea19fa0ed99244e67b624c72b8580b7e9553043245905be58796a608eb9345.incomplete'

    • @risunobushi_ai
      @risunobushi_ai  28 days ago

      it looks like the node can't properly download the Depth Anything V2 model into its folder. try selecting a different Depth Anything model in the dropdown menu, like the S version, or change the preprocessor to another depth estimator model (like MiDaS, Marigold, Zoe, etc.)