How to FACE-SWAP with Stable Diffusion and ControlNet. Simple and flexible.

  • Published Dec 20, 2023
  • We use Stable Diffusion Automatic1111 to swap some very different faces. Learn about the ControlNet IP-Adapter Plus-Face and its use cases. This solution requires only two additional ControlNet models, and no large installation packages like other face-swapping tools.
    ControlNet:
    github.com/Mikubill/sd-webui-...
    IP-Adapter models including plus-face:
    huggingface.co/h94/IP-Adapter...
    OpenPose models for ControlNet:
    huggingface.co/lllyasviel/Con...
    --
    My Video about Upscaling and ControlNet:
    • How to UPSCALE with St...
  • Science & Technology
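For reference, the downloaded files typically end up in the following places in an Automatic1111 install. The paths and exact filenames are an assumption based on the sd-webui-controlnet extension's conventions; they are not stated in the description above:

```
stable-diffusion-webui/
└── extensions/
    └── sd-webui-controlnet/
        └── models/
            ├── ip-adapter-plus-face_sd15.safetensors   <- IP-Adapter plus-face
            └── control_v11p_sd15_openpose.pth          <- OpenPose ControlNet
```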

Comments • 83

  • @kdzvocalcovers3516
    @kdzvocalcovers3516 6 months ago +5

    You the man! Concise, informative, nonsense-free lesson, pleasantly mixed audio, and a relaxing, enjoyable presentation. Bravo, and thank you!

    • @NextTechandAI
      @NextTechandAI 6 months ago +1

      Wow - thank you very much for this motivating and inspiring feedback!

    • @kdzvocalcovers3516
      @kdzvocalcovers3516 6 months ago

      @NextTechandAI It was well earned, my friend. Thanks again for the great tutorial!

  • @Chonky_Nerd
    @Chonky_Nerd 6 months ago +2

    I love your videos, thank you for your help so far! You have helped me install and learn to use SDXL, upscalers and ControlNet!

    • @NextTechandAI
      @NextTechandAI 6 months ago

      Your feedback is the best motivation for me to make more videos. Thanks for that!

  • @RobertWildling
    @RobertWildling 5 months ago +1

    Excellent! Really great tutorial! Thank you very much! Subscribed and looking forward to learning from your other videos! 🙂

    • @NextTechandAI
      @NextTechandAI 5 months ago

      Thanks a lot for your feedback and the sub. I'm definitely very motivated for my next video :)

  • @pilotdawn1661
    @pilotdawn1661 a month ago

    Very helpful. Liked and Subscribed. Thanks!

    • @NextTechandAI
      @NextTechandAI a month ago

      Thanks a lot for the like and the sub. I'm happy that my vid was helpful.

  • @dustinpoissant
    @dustinpoissant 5 months ago +4

    Yeah my faces end up looking VERY abstract, like completely wrong

  • @mhacksunknown2229
    @mhacksunknown2229 4 months ago +3

    I tried every step but it never worked.

  • @calurodriguezrome888
    @calurodriguezrome888 4 months ago

    Awesome, thanks.

  • @faredit-cq2xl
    @faredit-cq2xl a month ago +1

    Thanks.
    Do you know how I can use this method in Forge UI? In Forge the preprocessor is `InsightFace+CLIP-H` and the model is `ip-adapter-plus-face_sd15`; I don't know how to use `ip-adapter-clip_sd15` as the preprocessor!

  • @trashmike7642
    @trashmike7642 5 months ago

    Great video, thank you for sharing! Is this possible or even advised on Mac OS?

    • @NextTechandAI
      @NextTechandAI 5 months ago

      Thanks a lot for your feedback! As far as I know it's possible to install the Automatic1111 WebUI on Mac OS; only GPU support might be a bit tricky. The ControlNet extension seems to work, too. So, yes :)

  • @Oxes
    @Oxes 4 months ago

    Every step very well explained, subbed. Can you make something about ControlNet for SDXL? Inside Automatic1111, of course.

    • @NextTechandAI
      @NextTechandAI 4 months ago +1

      Thank you very much. I'll add it to my list. For sure you will need lots of VRAM :)

  • @fpvx3922
    @fpvx3922 3 months ago

    Thanks, this was useful :)

    • @NextTechandAI
      @NextTechandAI 3 months ago

      Happy to read this, thanks for your feedback :)

    • @fpvx3922
      @fpvx3922 3 months ago

      @NextTechandAI Thanks, you were quite civil about my critique on the other video. I valued that; thanks as well.

    • @NextTechandAI
      @NextTechandAI 3 months ago

      @fpvx3922 Constructive criticism like yours is valuable. Even if I don't always agree, it's an opportunity to grow. Thank you also for this feedback!

  • @siewertw
    @siewertw 5 months ago +4

    I tried your instructions and copied the settings, but I can't get it right. When I set denoising strength to 1, the face is replaced by a randomly generated image and not the ControlNet image. Am I missing a setting that I may not see in your video?

    • @NextTechandAI
      @NextTechandAI 5 months ago

      Everything should be in the video. Have you enabled the ControlNet unit? I assume you have selected the img2img tab, right?

    • @mariorancheroni9427
      @mariorancheroni9427 5 months ago +3

      Me too, not the same face at all. Did you solve it?

    • @MONGIE30
      @MONGIE30 5 months ago +1

      Same for me

  • @zaselimgamingvideos6881
    @zaselimgamingvideos6881 4 months ago

    I don't know, for me nothing beats ReActor; IP-Adapter sucks even with the Plus v2 version. The best way I get the desired result is using ReActor (txt2img) with any model with its specific CFG/steps etc., and then img2img with my trained model with its CFG/steps/method settings, without ReActor, at 0.17 denoising strength and 2x size.
    After img2img the face gets the same style as the rest of the picture without changing its appearance.
    But with IP-Adapter, the face doesn't even come 50% close. And I tried with anywhere from one to 20 pictures of the face so that IP-Adapter could get it right, but it still fails. I'm waiting for InstantID; it only works with SDXL at the moment and I use SD 1.5.

  • @marcusmeins1839
    @marcusmeins1839 5 months ago

    Do you have a tutorial about training LoRA models on Stable Diffusion with DreamBooth? I tried it with other tutorials, but their interface was different from mine, and it is hard to follow their steps.

    • @NextTechandAI
      @NextTechandAI 5 months ago +1

      @marcusmeins1839 Currently not, but I'll put it on my list for video ideas. You're using Automatic1111, I guess?

    • @marcusmeins1839
      @marcusmeins1839 5 months ago

      @NextTechandAI Yes, locally

  • @QuackCow144
    @QuackCow144 a month ago

    Mine doesn't have "DPM++ 2M SDE Karras" for the sampling method. Mine has "DPM++ 2M SDE" and "DPM++ 2M SDE Heun". Which one of these should I use?

    • @NextTechandAI
      @NextTechandAI a month ago

      It depends on your version of Automatic1111. You could try to update, if that doesn't hurt your installation. In the latest versions you select e.g. Karras in a list to the right of the sampling method, or leave it on "Automatic". If that is no option for you, start with DPM++ 2M SDE.

  • @amarissimus29
    @amarissimus29 3 months ago

    RuntimeError: 'addmm_impl_cpu_' not implemented for 'Half'. Still trying to work this one out... It seems to be working fine until the final output, then the error. I wonder if I've got something in the wrong folder. That happens often.

    • @NextTechandAI
      @NextTechandAI 3 months ago

      Half/float16 is for the GPU, and I guess your Stable Diffusion is running on the CPU. You have to use the GPU, or switch to "Full" (float32) instead of Half.
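If that diagnosis applies, one way to force float32 is through Automatic1111's launch options. A sketch of the relevant line in `webui-user.sh` (`--no-half` and `--precision full` are existing WebUI flags; whether they are sufficient for this particular setup is an assumption):

```shell
# webui-user.sh excerpt: disable float16 so CPU inference runs in float32
export COMMANDLINE_ARGS="--no-half --precision full"
```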

  • @KotleKettle
    @KotleKettle 5 months ago +3

    When I select IP-Adapter, ip-adapter_clip_sd15 disappears from the preprocessor list, and only the XL one is available. What am I doing wrong?

    • @KotleKettle
      @KotleKettle 5 months ago +1

      I found what the problem was. For some reason, ip-adapter_clip_sd15 doesn't appear with my model, but it does with the one you used. You should have mentioned that.

    • @captainblackbeard9104
      @captainblackbeard9104 4 months ago

      @KotleKettle Because it is designed for SD 1.5 models, and you probably used it on SDXL, I think.

    • @Cayane-md1tn
      @Cayane-md1tn 8 days ago

      Same problem here, missing ip adapter sd15 in preprocessor. What is the solution?

  • @Azmortsu
    @Azmortsu 3 months ago

    I followed every step, but I just get the original image with no face, as if I'd deleted it with Paint where I drew the mask, plus a pose image.

    • @NextTechandAI
      @NextTechandAI 3 months ago

      Have you checked whether the ControlNet component is active? Are there any error messages in the window of your automatic1111 server?

  • @bigdeutsch5588
    @bigdeutsch5588 2 months ago

    Unfortunately I ran into a similar issue as others described in the comments. My preprocessor will not update to show the same ones available for you, I'm not sure which preprocessor I should use so every time I run this it looks horrible. I have tried updating control net, using different models, including the base 1.5 model. I'm not sure what else I can try. I'm sure I've followed the video's instructions to the letter.
    Any ideas? My model shows up correctly, but not the preprocessor.

    • @NextTechandAI
      @NextTechandAI 2 months ago +1

      That's strange. I followed my own instructions with my relatively new Automatic1111 Zluda installation (see related short and vids) and the result was exactly like in the video. I noticed only one difference: the preprocessor being offered was called ip-adapter-auto. When doing the first run, the WebUI automatically downloaded clip_vision\clip_h. Setting the preprocessor to ip-adapter_clip_h for the next runs gave the same (good) results.
      Maybe the preprocessor's name is part of the WebUI and you have to update the WebUI. BUT if your generation works as expected, I would choose ip-adapter-auto or ip-adapter_clip_h without touching the existing installation.

    • @bigdeutsch5588
      @bigdeutsch5588 2 months ago +1

      @NextTechandAI Hi, thanks for the reply. Yes, I have auto and h, but I had issues with the quality of the result. I will retry tonight and let you know how it goes.

    • @bigdeutsch5588
      @bigdeutsch5588 2 months ago

      @NextTechandAI So I tried again with the clip_h preprocessor and experimented quite a bit with different settings to see if I could attain anything usable. I tried altering the source image, control weight, starting step and ending step (which was basically useless: anything less than 1.0 gave me results that looked nothing like the control image). I also altered the sampling steps and the method.
      I really tried but couldn't get anything usable.
      Using a 7800 XT, SD 1.5 installed as per your March 2024 video.

  • @ligmuhnugs
    @ligmuhnugs 5 months ago +1

    When I do this process, I get a blended face: the expression of the original with the colors and shape of the new one. It looks bad. I'd like a link to the images you use, so I can truly replicate your process.
    Another problem with combining images like this is that the skin tones don't match.

    • @ichhassdievoll
      @ichhassdievoll 5 months ago

      I also have issues. Tried some other faces, tried playing with the settings, no luck. Often I get totally deformed faces or just a hairy mess.

    • @ichhassdievoll
      @ichhassdievoll 5 months ago

      Updated xFormers and PyTorch and found ReActor. I now use a mixture of ControlNet with IP-Adapter, OpenPose and ReActor; works fine.

  • @juliensorel5535
    @juliensorel5535 19 days ago

    Is this process done entirely on the computer? Or does it upload/download to and from a website or the cloud? I have a very bad internet connection, and the apps I have tried all end up using a website.

    • @NextTechandAI
      @NextTechandAI 19 days ago

      @juliensorel5535 It's done completely locally. No need to access the web/cloud. You only need to download the extensions and the checkpoints.

    • @juliensorel5535
      @juliensorel5535 18 days ago

      @NextTechandAI Thank you. I will give it a try.

  • @arnaudcaplier7909
    @arnaudcaplier7909 5 months ago

    Very well done! The accuracy and the explanations / "whys" are highly valuable. Maybe the German way!
    I'm definitely subscribing to your channel and activating notifications ;) 👏 👏 👏
    Looking forward!

    • @NextTechandAI
      @NextTechandAI 5 months ago

      Thank you very much for the inspiring feedback and the sub! I'm very happy that the video and my 'German way' are helpful :)

  • @___x__x_r___xa__x_____f______
    @___x__x_r___xa__x_____f______ 5 months ago

    Can this be done with SDXL?

    • @NextTechandAI
      @NextTechandAI 5 months ago

      You need the models and ControlNet files for SDXL - and, depending on the size, lots of VRAM.

  • @mhacksunknown2229
    @mhacksunknown2229 4 months ago

    I subscribed and commented, but my results aren't what I expected. Can you help me please?

    • @NextTechandAI
      @NextTechandAI 4 months ago

      Please give more details. What exactly have you done, and what was the result?

  • @necrolydevlogs3932
    @necrolydevlogs3932 5 months ago

    Finally, after 4 days, I found this video and it finally works. But now I've got a new issue: the eyes are Asian now. Both models have almost exactly the same eyes, so I'm not sure why it produces Asian eyes. Help :3

    • @NextTechandAI
      @NextTechandAI 5 months ago

      Thanks for your feedback. I've never heard of such a strange issue. I guess you are working on the img2img tab, have tried different values for control weight, set denoising strength to 1 and left the prompt empty?
      It sounds crazy, but maybe the default face of your checkpoint is Asian and it is somehow mixed with your models. So enter a prompt describing your target model and please report back :)

    • @necrolydevlogs3932
      @necrolydevlogs3932 5 months ago

      @NextTechandAI Idk, I was just following every step you did, including the prompt etc.

    • @NextTechandAI
      @NextTechandAI 5 months ago

      @necrolydevlogs3932 So entering a prompt describing your target model or adding "Asian" to the negative prompt does not help?

    • @necrolydevlogs3932
      @necrolydevlogs3932 5 months ago

      @NextTechandAI Correct. Negative prompt "Asian", then tried "no Asians", then "Asian eyes" etc.

    • @NextTechandAI
      @NextTechandAI 5 months ago

      @necrolydevlogs3932 Very strange. I'm running out of ideas, you could try other checkpoints, although that connection is pretty far-fetched.

  • @aresic34
    @aresic34 6 months ago +1

    But why? We already have tools like Roop or ReActor that are way easier to use and give amazing results. Why would you use so many steps to achieve something that is not even better?

    • @NextTechandAI
      @NextTechandAI 6 months ago +3

      Thanks for asking. Both Roop and ReActor are based on models that do not allow commercial use. If this is no limitation for you - feel free. Additionally, ReActor, a sort of "successor" to Roop (which is no longer maintained), requires Microsoft Visual Studio to be installed - which seems like a bit of overkill.
      Against this background, the IP-Adapter plus-face delivers decent results.

    • @kdzvocalcovers3516
      @kdzvocalcovers3516 6 months ago +3

      I don't think Roop or ReActor produce a more accurate face swap than this method. I tried the other two and my results were far from exact face swaps. Plus, this method does not require prompts to get such good results. Just my opinion...

    • @NextTechandAI
      @NextTechandAI 6 months ago +2

      I see it the same way. Thank you for sharing your opinion.

    • @aresic34
      @aresic34 6 months ago +1

      @NextTechandAI Thanks, interesting 👍

    • @aresic34
      @aresic34 6 months ago

      @kdzvocalcovers3516 Will give it a try then, ty

  • @wittwickey
    @wittwickey 5 months ago

    Good content, really loved it. Do you have any page on social media?

    • @NextTechandAI
      @NextTechandAI 5 months ago +1

      Thanks a lot for this motivating feedback. I'm still working on my online presence. As soon as my social media pages are available, I will create a post in the Community tab.

    • @wittwickey
      @wittwickey 5 months ago +1

      @NextTechandAI We viewers are trying to interact with you so you can create more effective content. It's helpful to us.

  • @alinasama721
    @alinasama721 4 months ago

    How can I fix this error? "AttributeError: module 'torch.nn.functional' has no attribute 'scaled_dot_product_attention'"

    • @NextTechandAI
      @NextTechandAI 4 months ago

      Please add some details. What exactly have you done in order to get this error, what's your hardware and, especially, which torch version are you using?

    • @rubencastro3854
      @rubencastro3854 4 months ago

      Did you fix it? I have the same error. Please help. I have 6 GB VRAM, images 1000x1000.
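The AttributeError discussed above usually points to a PyTorch version older than 2.0, where `scaled_dot_product_attention` was introduced in `torch.nn.functional` (a likely cause, though the thread itself never confirms it). A minimal sketch of the version check, with illustrative version strings:

```python
# scaled_dot_product_attention was added to torch.nn.functional in PyTorch 2.0,
# so older installs raise the AttributeError quoted in the comment above.

def torch_has_sdpa(version: str) -> bool:
    """Return True if the given torch version string is >= 2.0."""
    major = int(version.split(".")[0])
    return major >= 2

print(torch_has_sdpa("1.13.1"))  # False -> upgrading torch should remove the error
print(torch_has_sdpa("2.1.0"))   # True
```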

  • @Wakssbm
    @Wakssbm 2 months ago +1

    ip-adapter_clip_sd15 does not appear at 3:15 and I don't know why.

    • @NextTechandAI
      @NextTechandAI 2 months ago

      I guess to the right there is no ip-adapter-plus-face, either? Have you copied the ControlNet file into the correct directory? Are you mixing SDXL with SD15?

    • @Wakssbm
      @Wakssbm 2 months ago

      @NextTechandAI Well, I'm very new to Stable Diffusion, so I don't know if I'm mixing SDXL with SD 1.5. On the right, ip-adapter-plus-face is an option. I'm confident I put everything in the right directory. I decided to set ip-adapter-auto on the left, with ip-adapter-plus-face on the right, and saw in the command prompt that, somehow, the preprocessor is ip-adapter-plus-face_sd15.
      It's probably due to my lack of knowledge of SDXL and SD 1.5, which I'm probably mixing, but I don't know what these settings are nor how to change them / avoid mixing them.
      In any case, after watching a whole bunch of face-swap videos, yours came out on top! After slightly tweaking a few settings beyond your recommendations, plus using multiple inputs instead of a single image in ControlNet, it really helped having more than a single angle for my virtual character. Thanks a lot!

    • @NextTechandAI
      @NextTechandAI 2 months ago

      Well, if you could select ip-adapter-plus-face_sd15, then your selection is correct. Regarding SDXL and SD 1.5: some people seem to run into trouble because they have selected the ControlNet files for Stable Diffusion 1.5, as in my video, and combined them with checkpoints for Stable Diffusion XL. When you download checkpoints from HuggingFace or Civitai, you'll see whether they're for 1.5 or XL.
      Nevertheless, I'm glad that you managed to modify your virtual character - thanks for your feedback!