Quickly fix bad faces using inpaint

Comments • 57

  • @thenewfoundation9822
    @thenewfoundation9822 1 year ago +1

    Finally someone who gave clear and working instructions on how to improve faces in SD. Thank you so much for this, it's really appreciated.

  • @MyAI6659
    @MyAI6659 1 year ago

    You are one of the few people on YouTube who actually knows what he's talking about and how SD works. Much love Bernard.

  • @tristanwheeler2300
    @tristanwheeler2300 3 months ago +1

    oldie but goldie

  • @AscendantStoic
    @AscendantStoic 1 year ago +1

    Learning this trick is quite a game changer.

  • @exile_national
    @exile_national 1 year ago

    512x512 and "original" instead of "fill" for the masked content saved my day, thank you Sir Bernard! You are indeed the VERY BEST!

  • @hatuey6326
    @hatuey6326 1 year ago +2

    Magnificent, saved to my favorites! This is going to improve my workflow so much!!! Thanks!!!!!

  • @Sandel99456
    @Sandel99456 1 year ago +1

    The best informative tutorial 👌

  • @metasamsara
    @metasamsara 10 months ago +2

    thank you, really clear and concise tutorial

  • @toptalkstamil5435
    @toptalkstamil5435 1 year ago

    Installed, everything works, thanks!

  • @mufeedco
    @mufeedco 1 year ago

    Thank you, great explanation.

  • @MonkeyDIvan
    @MonkeyDIvan 8 months ago +1

    Amazing stuff! Thank you!

  • @anuragkerketta6708
    @anuragkerketta6708 1 year ago +1

    Your tutorial did the job for me, thanks a lot, and subscribed.
    Post regularly and I'm sure you'll get a lot of followers quickly.

  • @Ilovebrushingmyhorse
    @Ilovebrushingmyhorse 1 year ago

    The 512x512 inpaint method helped dramatically, but I think the denoising strength shouldn't be so high if you want to keep the image close to the original: the less you want to change, the lower you should set it. I set mine all the way down to 0.01-0.05 just to add a little detail sometimes. Also, as far as I know, keeping the same prompt isn't always necessary.

  • @syno3608
    @syno3608 1 year ago

    Thank you so much.

  • @yogxoth1959
    @yogxoth1959 1 year ago +1

    Thanks a lot!

  • @whyjordie
    @whyjordie 8 months ago +1

    this works!! thank you! i was struggling so hard lol

  • @339Memes
    @339Memes 1 year ago +2

    Wow, I didn't know you could change the width to do inpainting, thanks

    • @metasamsara
      @metasamsara 10 months ago

      Yes, it's not obvious, especially since on mine for some reason the dimensions section is called the "resize" feature; it isn't obvious that you can pick a custom dimension for when you render the masked area only.

  • @RikkTheGaijin
    @RikkTheGaijin 1 year ago

    thank you!

  • @quantumevolution4502
    @quantumevolution4502 1 year ago

    Thank you

  • @kritikusnezopont8652
    @kritikusnezopont8652 1 year ago +2

    Amazing tutorial, thanks! Also, while watching it, I noticed the VAE and hypernetwork selection options on the top. I'm just wondering if that is an extension or something, because I don't have those options on my Automatic1111. Where can we find those please? Thanks!

    • @mrzackcole
      @mrzackcole 1 year ago

      I also noticed that. Can't find it in extensions. Would love to know if anyone figures out where we can download it

  • @Sandel99456
    @Sandel99456 1 year ago

    Is there any kohya documentation for the settings and what they do?

  • @MAASTEER007
    @MAASTEER007 14 days ago

    Thanks a lot

  • @arunuday8814
    @arunuday8814 1 year ago

    Hi, I couldn't understand how you linked the inpainting to a specific custom model. Can you pls help explain? Thx a ton!

  • @baobabkoodaa
    @baobabkoodaa 1 year ago +2

    I'm unable to reproduce similar quality results. Can you share more details on what you did to achieve this level of quality? Are you running in half precision or full precision mode? Did you toggle on the "color corrections after inpainting" option in settings? Where did you get the Lora model for this? I tried all the Ana De Armas Loras in Civitai, but it looks like the one you used in this video was not on Civitai. I suspect that your Lora model is the "magic" here that allows good inpainting results, possibly in conjunction with some settings you have toggled on.

    • @mkaleborn
      @mkaleborn 1 year ago +6

      Not sure I can help on the Lora side. But with my vanilla Automatic1111 and custom Checkpoint Merges, I had good results with this workflow:
      1. Generate a prompt2image of a lady standing in some wooded/natural setting. Medium distance with a face that was decidedly 'sub-optimal' (I purposely did not do Hi-Rez upscaling).
      2. I Upscaled that original 512x768 image in the Extras tab - 2.5x, ESRGAN_4x (I've switched to this from SwinIR_4x), no other upscale settings changed (all default)
      3. I copied my entire Positive and Negative Prompt from the Prompt2Img tab over to Inpaint. Then copied my newly upscaled image to Inpaint. Same as he did in his video.
      4. I Masked out the model's entire face and a little bit of her hair (but not all of it)
      5. Sorry for the ugly formatting, but here are my Inpainting settings:
      Resize Mode: Just resize, Mask Blur: 4, Mask Mode: Inpaint Masked (all defaults)
      Masked content: **Original** - I'm pretty sure *this* is the critical setting that needs to be selected for this to work. It keeps the original 'bad' face as a reference for general 'composition' when drawing the new face. Otherwise it will try to render the *entire* prompt, body and all, or just doesn't work properly.
      Inpaint Area: Only Masked (for the reasons he stated in the video, you only want it to focus on rendering your Masked area at the resolution you select below)
      Only masked padding, pixels: 64 - After some tests, I doubled the 'padding' value from 32 to 64. I found this helped the AI to 'see' the surrounding colour palette better, allowing the new face to 'blend in' better with her neck, shoulders, and overall skin tone
      Sampling Method: Euler A (same as prompt2img sampler), Sampling Steps: 60 (same as prompt2img)
      Width: 512 Height: 512 - for the exact reason he gave in the video
      CFG Scale: 7 (same as prompt2img). I didn't play with this setting, but I think it's fine left at the same level as your original render
      Denoising strength: 0.3 (I first tried 0.7). My first attempt at 0.7 was 'ok' or 'roughly acceptable', but when I lowered it to 0.3 and tried again I had much better results - a more natural fit for her neck and head position. Basically it used the original 'ugly face' as a closer reference point, but was able to render the whole face at 512x512 resolution
      Seed: -1 Restore Faces: Checked (I did not try it unchecked)
      And that was it. I think the flexibility probably comes with Denoising and CFG in how the image will look, and what variety you get with multiple renders. But a lower Denoising with a suitable "Only Masked Padding" set high enough to 'see' the surrounding area seemed to really help me get a face that blended in nicely with her body and the overall colour palette.
      Anyway, that's just my very brief and quick experience trying to fix some images that had 'broken' faces at medium / far model distances. Hope it helps!
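
      As a reference for readers who want to automate this, the settings above map fairly directly onto the AUTOMATIC1111 web UI img2img API. Below is a minimal sketch in Python, assuming the UI is launched with the --api flag; the file names and prompt strings are placeholders, and the field names are those of the img2img API payload:

      import base64
      import requests

      def encode(path: str) -> str:
          """Read an image file and return it as a base64 string for the API."""
          with open(path, "rb") as f:
              return base64.b64encode(f.read()).decode("utf-8")

      payload = {
          "prompt": "same positive prompt as the original render",           # placeholder
          "negative_prompt": "same negative prompt as the original render",  # placeholder
          "init_images": [encode("upscaled.png")],   # the image whose face you are fixing
          "mask": encode("face_mask.png"),           # white = area to repaint
          "resize_mode": 0,                # Resize mode: Just resize
          "mask_blur": 4,                  # Mask blur
          "inpainting_mask_invert": 0,     # Mask mode: Inpaint masked
          "inpainting_fill": 1,            # Masked content: original
          "inpaint_full_res": True,        # Inpaint area: Only masked
          "inpaint_full_res_padding": 64,  # Only masked padding, pixels
          "sampler_name": "Euler a",
          "steps": 60,
          "width": 512,
          "height": 512,
          "cfg_scale": 7,
          "denoising_strength": 0.3,
          "restore_faces": True,
          "seed": -1,
      }

      response = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
      response.raise_for_status()
      with open("fixed.png", "wb") as f:
          f.write(base64.b64decode(response.json()["images"][0]))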

    • @markdavidalcampado7784
      @markdavidalcampado7784 1 year ago

      @@mkaleborn I'm gonna try this now. It looks promising! My biggest problem with inpainting is that blurry artifacts are too easy to see when the image is upscaled. Any fix for that? Sorry for my English, I spent almost 15 minutes writing this.

    • @kneecaps2000
      @kneecaps2000 1 year ago

      You must set the inpaint area to "Only masked" and also set the resolution to 512x512.

  • @Doop3r
    @Doop3r 1 year ago +4

    I'm running into an odd issue. I'm following every single step to the T, and sometimes it works just as shown here... other times, instead of just the face in the masked area, I'm getting a scrunched-up version of the entire photo in that area.

    • @sestep09
      @sestep09 1 year ago +1

      Lowering the denoising strength worked for me when I had this happen.

  • @sarpsomer
    @sarpsomer 1 year ago +1

    Another great step-by-step tutorial from you. Can someone explain what the "Only masked padding, pixels = 32" value is for?

    • @stonebronson5
      @stonebronson5 1 year ago +3

      As I understand it, it is the area around the masked region that is taken into account when the new image is generated. So if you set it higher it will try to blend in better; if you set it lower, it will make more drastic changes. The padding value only applies when you set "Inpaint area" to "Only masked", since in "Whole picture" mode the padding effectively expands to the whole canvas.
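
      To make the idea concrete, here is a rough Python sketch (an illustration of the concept, not the actual Automatic1111 code) of how that padding effectively grows the mask's bounding box before the crop is re-rendered at the chosen resolution; "face_mask.png" is a placeholder file name:

      from PIL import Image

      def padded_crop_box(mask_path: str, padding: int = 32):
          """Bounding box of the painted mask, expanded by the padding on every side."""
          mask = Image.open(mask_path).convert("L")
          left, top, right, bottom = mask.getbbox()
          return (max(left - padding, 0),
                  max(top - padding, 0),
                  min(right + padding, mask.width),
                  min(bottom + padding, mask.height))

      # More padding -> a larger crop, so the model sees more of the surrounding
      # skin and background and the repainted face blends in better.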

    • @sarpsomer
      @sarpsomer 1 year ago +1

      @@stonebronson5 This is so helpful. It's similar to padding in design terminology, e.g. CSS padding. Never thought about that.

    • @kneecaps2000
      @kneecaps2000 1 year ago +1

      Yeah, it's also called "feathering"... just a gradient on the edge to avoid it looking cut and pasted in.

  • @syno3608
    @syno3608 1 year ago

    Can we replace the face with a face of another Lora ?

  • @testales
    @testales 1 year ago +5

    Ok, now we only need a way to do this with hands in let's say under 100 attempts. ;-)

    • @subashchandra9557
      @subashchandra9557 1 year ago

      You can use ControlNet for that.

    • @testales
      @testales 1 year ago

      @@subashchandra9557 Two weeks ago, when I wrote the comment, this wasn't common knowledge yet. ;-) Also, having to create a fitting depth map can still be somewhat labor-intensive.

  • @roseydeep4896
    @roseydeep4896 1 year ago +1

    Is there an extension that could do this automatically right after generating an image??? (I want to use this for videos, I need the frames to come out good right away)

    • @JJ-vp3bd
      @JJ-vp3bd 3 months ago

      did you find this?

  • @androidgamerxc
    @androidgamerxc 1 year ago

    How do you have the SD VAE and hypernetwork options?

  • @s3bl31
    @s3bl31 8 months ago +2

    Doesn't work for me and I don't know what the problem is: in the preview I see a good face, but in the last step it turns back into the bad face and the output is just an even worse oversharpened face.

    • @marksanders3662
      @marksanders3662 3 months ago

      I have the same problem. Have you solved it?

    • @s3bl31
      @s3bl31 2 months ago

      @@marksanders3662 Are you using an AMD card? If so, I think I fixed it with --no-half in the command line. But I don't know for sure, since it was that long ago and I switched to Nvidia.

  • @p_p
    @p_p 1 year ago

    How did you paste the prompt like that at 0:35?

    • @BernardMaltais
      @BernardMaltais 1 year ago +1

      I just dragged in a copy of a previously generated image. The prompt and config info is stored as metadata in each image you create... so you can just drag them onto the prompt field and load them back into the interface that way.
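
      For anyone who prefers to inspect that metadata outside the UI, a short Python snippet (a sketch assuming the image is a PNG straight out of Automatic1111; the file name is a placeholder) prints the same "parameters" text chunk the interface reads when you drop an image onto the prompt field:

      from PIL import Image

      img = Image.open("some_generated_image.png")
      # Automatic1111 stores the prompt and generation settings in a PNG text chunk
      print(img.info.get("parameters", "no generation metadata found"))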

    • @p_p
      @p_p 1 year ago

      @@BernardMaltais Wait... whaaat?? I've been dragging into the PNG Info tab all this time for nothing lmao, thank you!

  • @progeman
    @progeman 1 year ago

    When I try this, exactly the same as you showed, it tries to paint the whole prompt into that small area of the face. It doesn't work for me; could it be the model I use?

    • @progeman
      @progeman 1 year ago

      Correction: I needed to mask a little bit more of the face, then it worked.

    • @TutorialesGeekReal
      @TutorialesGeekReal 1 year ago

      How did you fix this? Every time I've tried, it always draws the whole prompt in that small area.

    • @progeman
      @progeman 1 year ago +1

      @@TutorialesGeekReal Try lowering the CFG scale; I put in something like 0.4.

  • @goldenboy3627
    @goldenboy3627 1 year ago

    Can this be used to fix hands?

  • @MarcioSilva-vf5wk
    @MarcioSilva-vf5wk 1 year ago

    The Detection Detailer extension does this automatically.

    • @BakerChann
      @BakerChann 1 year ago

      How does it work? I found it to download, but I'm unsure where to put it or how to activate it.