Easy Inpainting for ANY model (SDXL, Flux, etc)

  • Published Dec 28, 2024

Comments • 73

  • @risunobushi_ai  2 months ago  +16

    I'm back, sorry for the wait!

  • @1lllllllll1  1 month ago  +3

    Oh my golly, FINALLY a real teacher who actually EXPLAINS what is happening behind the scenes. Liked, subbed, and loved. I’m soooo tired of the millions of bs tuts out there that tell you nothing.
    Thanks a ton!!!

  • @aysenkocakabak7703  27 days ago  +1

    Sincerely, I follow each of your videos, and your artistic approach amazes me every time. We are so lucky to have you here. The way you open-source your knowledge is amazing.

    • @risunobushi_ai  12 days ago

      thank you for the kind words!

  • @Lily-wr1nw  15 days ago  +1

    learned a lot! Thanks master.

  • @abaj006  1 month ago  +1

    Very good tutorial, thanks for explaining the specific nodes.

  • @baheth3elmy16  2 months ago  +3

    Welcome back! Congratulations on the new job.

  • @zerobase9858  2 months ago  +1

    Hi! I really like your creative and meticulous workflow and your attitude towards licensing. Glad to see you back in action.

  • @bregsma  2 months ago

    Thank you as always for sharing your insight. Everyone is congratulating you on your new job, so congratulations from me as well!

  • @JoelB71  2 months ago  +1

    We missed you! Thanks for another beautifully informative tutorial, and congratulations on your new position! They're lucky to have you :)

    • @risunobushi_ai  2 months ago

      thank you! I missed doing videos too

  • @DanDanTheAiMan  2 months ago  +2

    Congrats on the new job!

  • @JohanAlfort  1 month ago

    Really nice workflow and explanation, thanks :)

  • @antichitati.si.trandafiri  2 months ago  +1

    Congrats on your new job! I have been using Photoshop for 20 years, so I am looking to learn Flux also to expand my art techniques. Thank you for the tutorials!

    • @risunobushi_ai  2 months ago

      thank you! While PS is great for ease of use, I think creating automated pipelines in Comfy is better for large volumes that always need the same logic applied.

  • @runebinder  2 months ago

    Really nice detailed overview and clearly explained, thanks :)

  • @prodmas  2 months ago  +2

    Look for the Inpaint crop and stitch nodes. They do the same thing as your advanced workflow, but much easier.

  • @Mranshumansinghr  2 months ago

    Exactly what I was looking for. It's like you read my mind.

  • @kallamamran  2 months ago  +1

    "Load & Resize Image" from KJNodes handles loading and resizing/scaling (with a multiple). It can replace your complete Input group 😊 Thanks for another great video!

  • @Neotrixstdr  2 months ago  +1

    Great work!

  • @Zampano2  2 months ago  +1

    Congratulations on the new job! Hope they appreciate your knowledge... Thanks for the workflow; looks like it's time to finally download that fat union ControlNet model... my SSD is crying...

    • @risunobushi_ai  2 months ago  +1

      Thank you! As suggested in another comment, you could use the Alimama inpainting ControlNet for Flux, but it works differently, and in my testing it's not as "catch-all" as depth or other ControlNets.

  • @mauriziogastoni9779  1 month ago

    Great stuff and a great explanation! I normally use the "prepare image for inpaint" node to crop and the "overlay" node to stitch it back, but I noticed that it keeps the original image proportions for the bounding box, losing resolution. That doesn't seem to be the case here, so I will probably update my workflows with this =) Thanks!

  • @Mranshumansinghr  2 months ago

    IC-Light v2 is out. Can't wait for your next video.

  • @MaxRohowsky  19 days ago

    dude, thanks for these videos! Really helped!
    Do you have any idea how I could change the view outside a window? I would like to keep the window and everything around it the same - just change the view... any idea?

    • @risunobushi_ai  18 days ago

      If you can create a mask in something like Photoshop, you can import the mask separately. As long as it lines up with its image, you can inpaint over a separately loaded mask instead of drawing one in the "open in mask editor" window.
      Create a mask only inside the window; then, after loading the image and mask, adjust the ControlNets' strength to taste and inpaint only inside the window.
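A separately painted mask only works if it lines up with its image pixel-for-pixel, so it's worth validating before wiring it in. A minimal sketch outside ComfyUI, with hypothetical sizes and a made-up window region:

```python
import numpy as np

# Minimal sketch, assuming hypothetical dimensions: a mask painted in
# Photoshop must match its image pixel-for-pixel before being fed into
# the inpaint pipeline alongside the image.
def mask_matches_image(image: np.ndarray, mask: np.ndarray) -> bool:
    # image: H x W x 3 RGB array; mask: H x W grayscale array (white = inpaint)
    return image.shape[:2] == mask.shape

image = np.zeros((768, 1024, 3), dtype=np.uint8)
mask = np.zeros((768, 1024), dtype=np.uint8)
mask[200:500, 300:700] = 255  # mask only the (hypothetical) window region

print(mask_matches_image(image, mask))  # True
```

If this check fails, resize or re-export the mask rather than letting the pipeline silently misalign the inpainted region.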

    • @MaxRohowsky  18 days ago

      @@risunobushi_ai Hey, thanks for the quick reply! The thing is that I'm programming a web app which needs to do all this automatically. I was hoping there would be a ready-to-use model on Replicate, but it looks like I'll need to create a custom model for this :D

  • @baheth3elmy16  1 month ago

    Thanks again, I'm returning to your video. I have a question, please: what setting do I change in the lower groups (Flux and SDXL) so that the generated preview/save image is identical in size to the one I loaded and masked in the Input group? Thank you!

  • @defidigest9  1 month ago

    I needed this

  • @ayakakamisato-ls8nu  1 month ago

    great project

  • @ralfschwarzfischer3525  17 days ago

    Hey, nice video. Have you checked whether the aspect ratio of the extracted area influences quality? And have you tested the workflow with SD 3.5?

  • @serasmartagne  2 months ago

    I use the Apply Advanced Controlnet node in ComfyUI-Advanced-ControlNet by Kosinkadink, as that has an optional mask to control which regions are influenced by the depth map conditioning. In your example of inpainting large flowers over small ones, I would provide the inverted inpainting mask as an input mask to the Apply Advanced Controlnet node. The effect is that the masked conditioning helps the inference understand the context around the target inpaint area, but ignores the existing content inside the area.

  • @ArnaudSteinmetz  2 months ago  +1

    Very informative as usual!
    I'm wondering, why not directly use inpainting ControlNets like the one from Alimama?

    • @risunobushi_ai  2 months ago  +1

      I debated showing them as well, but ultimately I decided against it because:
      - they’re not as straightforward to understand in terms of how they work (with depth it’s much easier to understand from the preprocessed image)
      - they’re not always as good as a custom ControlNet setup (for example, I had mixed results using them with face Loras / garment Loras combos)
      - they’re not always available for all models, or they might not be as quick in being released, so it wouldn’t have been a “catch-all”, easy solution
      But yeah, they’re a valid alternative depending on the use case

  • @d4veejones53  2 months ago

    Another great workflow by the looks of it! Although I get a KSampler error: 'mat1 and mat2 shapes cannot be multiplied (1x768 and 2816x1280)'. Is this due to the original picture size, or something being wrong with the Math 1 and 2 nodes?

    • @risunobushi_ai  2 months ago

      this is the error you get when you're trying to use a ControlNet with a different model than the one it was designed for - an SDXL ControlNet with a FLUX model, for example
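That mismatch can be reproduced outside ComfyUI. The sketch below takes the shapes straight from the error message (their exact meaning inside the model is an assumption here) and shows why the multiplication has to fail:

```python
import numpy as np

# Sketch of the shape mismatch behind the error: conditioning produced for
# one model family (768-dim here) doesn't fit the weight matrix another
# model's layers expect (2816 x 1280 here), so the matmul raises.
conditioning = np.ones((1, 768))
weight = np.ones((2816, 1280))

try:
    conditioning @ weight  # inner dims 768 != 2816: cannot be multiplied
    compatible = True
except ValueError:
    compatible = False

print(compatible)  # False
```

Swapping in a ControlNet built for the same model family makes the inner dimensions agree, which is why matching ControlNet and checkpoint fixes the error.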

  • @ValorantNexus  2 months ago

    thanks for the great info

  • @DarioToledo  2 months ago  +1

    I have seen some inpaint ControlNets, like the Alimama inpaint alpha (now beta) for Flux. Any idea how they should be implemented? Are they an alternative to the InpaintModelConditioning node?

    • @risunobushi_ai  2 months ago

      hi! Alimama's inpainting ControlNet, AFAIK, doesn't need a preprocessor, and in my testing, the higher the strength, the more it forces the inpainting over the original image. But then again, I'm not an expert on inpaint ControlNets, mainly because I find them too specific to what they were trained for; I'd rather use fewer tools that are more suited to general use.

  • @artemnikolski3197  25 days ago

    KSampler freezes and reports an error... any known solution for that?

  • @NinoLouLeChenadec  2 months ago

    Hi Andrea, Comfy is really great for flexibility between LoRAs and models, but for inpainting I prefer to use Invoke AI (local UI). Have you tried it? Thanks for your work 🙌

    • @risunobushi_ai  1 month ago  +1

      I don't use Invoke in my stack, mostly because the clients I work for like to implement Comfy rather than anything else, or straight up use the API versions of the JSON files

  • @salomahal7287  1 month ago

    Hey, I like the idea, but I've got a problem with it: only 1 out of 3 seeds gives me what I asked for, in both SDXL and Flux. I don't know why - maybe the models? Flux gives me really random results. I was also trying to implement the new Detail Daemon node with a custom advanced sampler, which also didn't really inpaint as wanted. Is there a way to implement that sampler as an extra node in the standard KSampler used in your workflow?

    • @risunobushi_ai  1 month ago

      Did you test it before using Detail Daemon, or did you straight up use it alongside it? I haven't tested Detail Daemon yet, and AFAIK it works by using model shifts, which is a much more invasive approach than usual - so I wouldn't trust it to work properly with this kind of pipeline straight out of the box

    • @salomahal7287  1 month ago

      @@risunobushi_ai I did run the workflow as is, with Flux dev and an inpaint model on the SDXL side. I wanted to inpaint red dots on a person's cap - I don't know if that's a difficult task, but both sides do whatever they want with the instruction: black logos or nothing at all. It's kinda weird. OmniGen was somewhat able to achieve it, but after some tries it seems to me that in your workflow the sampler just doesn't care about the text. Maybe it's just me though... so it doesn't seem to be a Detail Daemon problem

  • @panonesia  2 months ago

    Can we add a LoRA to speed up the process, e.g. a turbo LoRA to make it 8 steps? Where should it be placed - before or after the Differential Diffusion node?

    • @risunobushi_ai  2 months ago

      yes you can, and usually you can apply it wherever, before or after Differential Diffusion. The only time I've had issues with the placement of Differential Diffusion was with specific versions of Comfy while using IPAdapter Advanced, in which case Differential Diffusion should go either before or after the IPAdapter - I don't remember which

  • @oonefilms  2 months ago

    I'm a bit lost about inpainting itself - do you just paint an area of the image with a solid color like black and then open it in Comfy?

    • @risunobushi_ai  2 months ago  +2

      hi! in order to inpaint, you can either:
      - input an image, and open it with the mask editor (right click on the image), then draw your mask, like in this video or
      - input an image, and input a custom mask (in this case you'd need to rewire the mask pipeline to account for that)

  • @antronero5970  2 months ago  +1

    Yeah!

  • @erikdias9604  2 months ago

    Question: First, thank you for your video and your explanations.
    In Photoshop, if I have an extra arm or something else in excess, I select it and click on Generate without doing anything else.
    In Flux/ComfyUI, I am confused. I am a beginner, and I would have liked to be able to select the part to delete like in PS, but I am not sure I understood from your video whether that is possible (I have trouble understanding, so it does not come from you ^^; ).
    Thanks again for your work; it helps me a lot.

    • @risunobushi_ai  2 months ago

      Hi! In your specific case, you’d want to use a very low ControlNet strength, because you don’t want to follow the underlying picture too much - otherwise, if you did the opposite, you would always get something following the depth of the extra arm.
      It’s possible, it just takes a bit of time adjusting to it!

  • @4etam  1 month ago

    hi, please tell me how I can make the VAE visible. I downloaded the .safetensors file and placed it in the models/vae folder, but the node still doesn't see it

    • @4etam  1 month ago

      and can I invert the mask and replace the background in a full-length portrait shot?

    • @risunobushi_ai  1 month ago

      hi! did you refresh Comfy after placing the models?
      you can invert masks by using an Invert Mask node, or by using the Grow Mask With Blur node's "inverted mask" output
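Outside ComfyUI, mask inversion is just flipping the grayscale values. A minimal sketch of what an Invert Mask node does, on a tiny made-up mask:

```python
import numpy as np

# A mask is a grayscale image where 255 marks the area to inpaint.
# Subtracting from 255 swaps subject and background, e.g. to repaint
# the background around a masked full-length portrait subject.
mask = np.array([[0, 0, 255],
                 [0, 255, 255]], dtype=np.uint8)
inverted = 255 - mask

print(inverted.tolist())  # [[255, 255, 0], [255, 0, 0]]
```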

  • @casperd2100  1 month ago

    Hi, sorry but I'm super new at this. I'm getting missing node errors:
    ---
    Missing Node Types
    When loading the graph, the following node types were not found
    UnetLoaderGGUF
    GetImageSize+
    DepthAnythingV2Preprocessor
    SimpleMath+
    ImageResize+
    GrowMaskWithBlur
    ---
    Do I have to install some extensions to get these nodes to work?

    • @risunobushi_ai  1 month ago

      hi! you need to go into the Manager (if you don't have it installed, get it here: github.com/ltdrdata/ComfyUI-Manager ) and install the missing custom nodes. Once that's done, install any models you're missing - for example, the GGUF node needs a quantized version of Flux dev, found here: huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q4_0.gguf
      usually, if you load a workflow, look up the missing models on Google and check their docs, you should be able to find them and place them where they belong

    • @casperd2100  1 month ago

      I found the extension needed for each node type:
      UnetLoaderGGUF - ComfyUI-GGUF
      GetImageSize+, ImageResize+ - Image Resize for ComfyUI
      DepthAnythingV2Preprocessor - ComfyUI's ControlNet Auxiliary Preprocessors
      SimpleMath+ - SimpleMath
      GrowMaskWithBlur - ComfyUI-KJNodes

  • @titanoplastik  1 month ago

    Hello, I'm encountering the following error right at the beginning:
    Prompt outputs failed validation
    SimpleMath+:
    - Return type mismatch between linked nodes: a, INT != INT,FLOAT
    SimpleMath+:
    - Return type mismatch between linked nodes: a, INT != INT,FLOAT
    Can you give me a tip on how to fix this?

    • @titanoplastik  1 month ago

      I solved it by simply using the Utils Math Expression node instead.

  • @FEILIU-m6c  1 month ago

    👍👍👍

  • @generalawareness101  1 month ago

    Do text. I don't mean on a sign; I mean: Image 1 + text "Hello", and out comes Image 1 with the "Hello" text that FLUX created overlaid.

  • @Art13eck  1 month ago

    but it's not a full-blown inpaint, it's just replacing one thing with another; it's a very simple thing...

    • @Lily-wr1nw  15 days ago

      What do you mean? Can you please explain? I'm a noob, sorry.

  • @oonefilms  2 months ago

    Sorry, one more noobie question: I've downloaded Depth anything v2, but it keeps giving me this error even though I have a file in that folder: [Errno 2] No such file or directory: 'D:\\ComfyUI_windows_portable_nvidia\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\depth-anything\\Depth-Anything-V2-Large\\.cache\\huggingface\\download\\depth_anything_v2_vitl.pth.a7ea19fa0ed99244e67b624c72b8580b7e9553043245905be58796a608eb9345.incomplete'

    • @risunobushi_ai  2 months ago

      it looks like the node can't properly download the Depth Anything v2 model into its folder. Try selecting a different Depth Anything model in the dropdown menu, like the S version, or change the preprocessor to another depth estimator model (like MiDaS, Marigold, Zoe, etc.)
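Another option (an assumption, not suggested in the video) is to clear the stale partial download so the node can retry: the `.incomplete` suffix in the error path marks an interrupted Hugging Face download. A hypothetical cleanup sketch that only lists such leftovers (the demo directory and file name below are made up):

```python
from pathlib import Path

# Hypothetical cleanup sketch: find stale *.incomplete files left behind
# by an interrupted Hugging Face download, so they can be deleted and the
# model re-fetched on the next run. Demo paths are for illustration only.
def find_stale_downloads(root: Path) -> list[str]:
    return sorted(p.name for p in root.rglob("*.incomplete"))

root = Path("ckpts_demo")
(root / "download").mkdir(parents=True, exist_ok=True)
(root / "download" / "depth_anything_v2_vitl.pth.incomplete").touch()

print(find_stale_downloads(root))  # ['depth_anything_v2_vitl.pth.incomplete']
```

Deleting the listed files (after confirming them) lets the preprocessor re-download the model cleanly instead of tripping over the partial file.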