Text & image guide for Patreon supporters www.patreon.com/posts/use-same-face-ip-98117124
👋
Dude. Why have you burned subtitles into this video!? You probably have no idea how distracting it is for people with ADHD. There's a big button to enable closed captions inside a YT video (which YT generates automatically) on every single platform, and it's turned off by default for a reason. I can't get anything from this tutorial with those subs burned in. Love the content, Sebastian. Been watching for a long time. But this one, and any others with burned subs, is a no-go for me and possibly others.
Shame. I couldn't wait to dig into this one.
Great vid. Can you use the new Stable Diffusion WebUI Forge with this? Automatic1111 is a mess.
I had been aware of this extension for a few days, but after looking at some other guides (and being confused) I decided to wait for yours, which as anticipated was clear, concise and comprehensive. Thanks Sebastian.
Happy to hear that! :)
Good thing this is available for SD 1.5. Not everyone is able to catch up with SDXL.
One nice thing about using IP-Adapter first to create the base image is that it gives you a good image to run Reactor on for a face swap. So if you want an image that very closely resembles yourself, just use Reactor to swap your face back on there. :)
That's actually... pretty clever 😅🌟
This is an option, but what's the point if using this adapter with the LoRA significantly spoils the render quality? It's probably not worth it.
Great tip. I'm getting better results with Reactor than with ControlNet.
Yes, but only if you're aiming to generate a realistic image. Reactor is not good when you wanna make a very deformed version of someone.
8:29 The edit with the music is just too funny man I actually had to laugh 😂
Thank you for the tutorial! :)
Woah! Thanks for this very useful video, such a good job! I'm a newbie and your videos are perfect for me.
I've tried it and it's amazing, even though for now all the characters in the image look the same 😂
Set it up exactly as he did and it's not even close to looking like my input images. There seem to be a lot of how-to videos on SD, and not one of them gives the same results as posted. Moving on to the next one.
Same here.
Seems like a good way to test your input images before making a textual inversion or a LoRA.
Hi sir, thank you so much for the video, you always bring the best Stable Diffusion tutorials.
So nice of you, thank you :)
You explain it perfectly!
Hello Sebastian. Thank you very much for your insightful tutorials. I started playing with SD thanks to you. I just recently installed Forge as instructed in your video, but how do I install IP-Adapter Plus on Forge? I did the same as in this tutorial but I don't get the proper selection for the preprocessor. I can select IP-Adapter Plus for the model but not for the preprocessor.
I'm facing the same problem. Did you find a solution?
same here
@raphalopes495
Same problem here @raphalopes495
Any luck?
+sebastiankamph Questionmark.
By the way, as usual this is a great tutorial. As I had to try reinstalling Reactor, I wanted to follow along with a tutorial to make sure I was doing it right. I couldn't find yours at first, and the alternative one was terrible, so I persevered until I found yours, which is great! You are the best!
I'm glad you finally found mine and that it helped you get things running! Good to know my guides are preferred :)
@sebastiankamph yeah, even with your dad jokes. 🤣
Nice! To the people saying it doesn't work: check if you're getting the error "Insightface: No face found in image."
If so, increase the padding around the faces in your images to help the face detection model, and just to be sure, make the ControlNet images bigger than 512px 🤗
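For anyone who wants to pre-check their reference images, here is a minimal Python/Pillow sketch of that advice; the padding ratio, minimum size, and file names are illustrative assumptions, not settings from the video.
```python
# Pad the border around each reference image and upscale small ones so the
# InsightFace detector has enough context and resolution to find the face.
from PIL import Image, ImageOps

def prepare_reference(path, pad_ratio=0.25, min_side=768):
    img = Image.open(path).convert("RGB")
    # Add a neutral border so the face is not cropped right at the edges.
    pad = int(min(img.size) * pad_ratio)
    img = ImageOps.expand(img, border=pad, fill=(128, 128, 128))
    # Upscale if the shorter side is below the minimum we want (>512px).
    if min(img.size) < min_side:
        scale = min_side / min(img.size)
        img = img.resize((round(img.width * scale), round(img.height * scale)))
    return img

prepare_reference("face_01.jpg").save("face_01_padded.jpg")
```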
Don't forget to enable your ControlNet module, which very subtly happens here: 4:34
And I was just wondering why my images didn't look anything like me. 😂
Other than that, great tutorial !! Thanks! 👍
Glad you found it! :D
Thanks! I was wondering why my images didn't look the same😂
I've been playing with it and the main issue is that the LoRA can drastically change your image even if you set it to lower values. Reactor isn't dead yet. 😊
Yeah, some LoRAs work great but others effectively destroy the person. That's what you get with community-based content though; it's usually hit or miss in terms of quality.
Thanks Sebastian, I used to do this with other methods, but this way is much cleaner. 🤘
really?
Hey, this is a great video, thanks for sharing. Can you advise on how to make the backgrounds clear and not blurry? Every image generated leaves the background out of focus.
Great tutorial. Works really well with SD 1.5, but with SDXL, bodies and faces seem way off and very unrealistic (despite negatives) unless it's a close-up portrait facing the camera. Instant-ID gives me similar results with SDXL; not great for anything but portraits.
Best settings:
LoRA weight: 0.65
ControlNet guidance (control weight): 1.25
Keep the starting control step at 0 and the ending control step at 1.0
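For anyone scripting this, here is roughly how those numbers could map onto the Automatic1111 txt2img API with the ControlNet extension. The endpoint, field names, and the model/LoRA file names are assumptions based on a typical sd-webui-controlnet install, so verify them against your own setup.
```python
# Sketch: send the settings from the comment above (LoRA 0.65, ControlNet
# weight 1.25, start 0, end 1.0) to a locally running Automatic1111 instance.
import base64
import requests

with open("face_01.jpg", "rb") as f:
    ref_image = base64.b64encode(f.read()).decode()

payload = {
    # The LoRA weight (0.65) lives in the prompt tag; the file name is illustrative.
    "prompt": "photo of a man as a viking <lora:ip-adapter-faceid-plusv2_sd15_lora:0.65>",
    "steps": 30,          # sampler settings here are arbitrary, not from the comment
    "cfg_scale": 7,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "enabled": True,
                "image": ref_image,
                "module": "ip-adapter_face_id_plus",       # preprocessor name may differ per install
                "model": "ip-adapter-faceid-plusv2_sd15",   # ControlNet model name may differ
                "weight": 1.25,                             # the "guidance" value above
                "guidance_start": 0.0,
                "guidance_end": 1.0,
            }]
        }
    },
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
print(resp.status_code)
```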
Interesting, thanks!
You're definitely on to something concerning lowering the weight of the LoRA. The higher the weight, the more it wants to zoom whatever picture you're generating into a portrait. If you lower the weight, it zooms out so there's more going on (e.g. me riding a unicorn into battle while wielding a sword).
Great tutorial, 100% spot on. I'm new to SD. Is there a way to save the "outcome" of the face, so you can use it later?
We need an answer, good sir.
The subtitles are really helpful!
Thank you for the feedback! Happy to hear it
If I met you in a dark alley at night it would be a pun-ishing time for both. 🙂
I stumbled onto this through trial and error but was missing the settings to get good results. Cheers.
The faceid portrait model is even better.
Hey! I'd love to know in what ways you find it better. Been playing with the latest 1.1 and not really seeing much (and limited to 1.5)
@sebastiankamph I get much better likeness, and it's super good for mixing faces. RealisticVision5, DDPM, 50-60 steps, 4-5 CFG, weight at 1.0, start/end at 0-1.0. I think the input images are very important. I use five, and they're all 768x768, taken from almost the same angle and distance, with a little bit of space around the head. Using Comfy.
I guess this all might depend on the subject. I haven't tried it with many different faces. I never had much luck with SDXL anyway :\
Is this a ControlNet thing? Can you share the link?
@lucianodaluz5414 Or just look at Sebastian's link in the description.
May I ask what you mean by this?
Great tutorial as always.
I laughed at 8:46 😂😂
It works, thank you!!! From Spain.
Unfortunately, everybody who shows face swaps only shows them with portrait images as the target. It would be much more interesting to know whether this also works well with scenes where the environment is more important and the person is shown full body, meaning the target face is significantly smaller.
It's not working when I use my model's face with multiple images, or even with a single one.
Same here, this method doesn't work with multiple images.
Can ForgeUI do multi-input? I'm trying to use batch folder and batch upload but I think they only use the first image.
Awesome video once again. Here, have my sub.
Absolutely love this IPAdapter thing. I only roughly knew what it does so far. That's a game changer.
I laughed for 3 minutes 🤣🤣🤣
nice tutorial btw
Hey! Thanks for this great guide! Unfortunately it doesn't work for me, just results in images of random people.
Did you find a solution? I get the same. I also tried using with and without ipadapter and the results are the same. I think maybe this has recently broken? I only recently downloaded all the latest versions of everything but this ipadapter face thing doesn't seem to work.
same here, would love to hear if anyone found a solution to this
ControlNet is behaving very strangely. The IP-Adapter V2 worked a couple of times and then began to produce completely worthless results, nothing like the sample photos I uploaded.
Better than Reactor? Because that one is really good. And also much easier to set up.
I think I watch your videos more for the dad jokes so I can annoy my kids more effectively LOL
How does this compare to Reactor? That's the best one I've found, but of course I'm always looking for something that might be better.
is "batch upload" the same thing as "multi input"?
wonder if this would work with SSD-1B?
Thank you
Finally, a really well explained Tutorial that works perfectly!
Glad to hear that! :)
First, thanks for the video. Your channel is really helpful in keeping up to date with all the changes, and you explain everything very well.
I've been using Reactor for a few months to generate a consistent face for a model, and I really like it. Decided to give this a try, expecting to get similar results with this method (but somehow better and/or more consistent); however, that's not what happened.
The face swaps I'm getting with IP-Adapter are very different from the results I get with Reactor; it seems like another person, which makes this method not very useful for me (at least for this particular project, which already has a strongly defined face). Is this normal? Any tips to improve? For now I will keep using Reactor, since it's giving me better results.
Thanks for the video. I find it quite hard to get into SD. There are so many options, I have the feeling it's overcomplicated. We need a simpler UI.
I think the preprocessor's name changed to InsightFace and CLIP ViT on the newest Forge.
I'm using Forge and I don't have the FaceID Plus preprocessor... where can I find that and where do I put it?
Did you find the solution?
@zoro_uchiha777 With Forge the default IP-Adapter preprocessor called "InsightFace+CLIP-H (IPAdapter)" works just fine. Just follow what he does with everything else and it will work. Also, you might not need to use the LoRA, but if you do, I find that switching the weight at the end of the LoRA tag from 1 to 0.4 gives better results.
I have Forge UI and I don't know how to get this working. Preprocessors for FaceID don't appear.
did you find any solution?
I got it all installed, I'm just trying to get it to look like my photos but it comes out nothing like them XD. So I gotta play around and see what I did wrong. Great guide though, it looks great even though it's not me XD
did you find the solution?
Got an "unable to import dependency" error, how do I fix it?
Any tips for choosing the most suitable sample images? Resolution? Background? Portrait angles? Number of samples?
Since it works with SDXL, will it work with the PonyXL model? Or the IllustriousXL model?
Why do Starting Control Step and Ending Control Step sometimes not affect the output render at all? And if I set 0.2-0.8 as in your example, I have zero resemblance to the reference images 😂😂😂 Also, should I change the extension from .bin to .pth? Or does it not matter atm? Thanks
Great tutorial! But I have one problem. My Stable Diffusion generates 2 or more people 50% of the time. Is this a common problem?
Could you please show us how to merge an SDXL trained model with another SDXL model, like we do with the SD 1.5 checkpoint merger, where in A we put our trained model, in B we put Photon or DreamShaper, and in C a pruned checkpoint, with the 56000 or 84000 VAE?
We want to see a tutorial like that for SDXL.
Is there a specific version of DreamBooth everyone uses? Mine looks very different compared to some.
Is there a way we can generate images with the same face and clothing but in a different pose?
Which is better, ReActor or IP-Adapter?
I really like your voice and tone, I feel like I can slowly drink my coffee and really focus on the job. Other YouTubers just spit and scream at you to make the video "more interesting". Thanks.
I appreciate that!
Can you do this in ComfyUI?
Maybe FaceFusion or Roop can help to reach that last step of similarity.
For some reason I am having difficulty adding the upgraded version of ControlNet to ForgeUI, anyone else having this issue? Maybe it's conflicting with the already built-in ControlNet?
Thanks for your tutorial! But what can I do when I see this: AttributeError: 'NoneType' object has no attribute 'mode'?
How do we get to the page you show when you start the video?
How do I fix the preprocessor not showing ip-adapter_clip_sd15 or any of the other FaceID-related models?
Man, ControlNet is not working at all for me. I have installed the extension and all of its models and uploaded them and everything, but Stable Diffusion is completely ignoring it. Yes, I pressed Enable and did everything from the videos. Any advice, or a Discord server, or anyone I can screen-share with so they can maybe see where the problem is?
I had the same case, but I resolved it after changing/playing with the parameters. The given parameters for ip-adapter-faceid-plusv2-sdxl didn't work for me, so I went with ip-adapter-faceid-plusv2-sd15. It still didn't work with the parameters Sebastian gave in the video; it only started working after I changed the CFG from 1.5 to 5 and the sampling steps to 20, and only then did my SD start recognizing ControlNet. I was able to revert to what Sebastian suggested in the video once it started working. My best results came with ip-adapter-faceid-plusv2-sd15, sampling steps 30, CFG 8, control weight 1.0.
Nice tutorial, Sebastian. But why doesn't it work with batch images? Only with single-image input.
Hello, thank you for the tutorial.
I am using Forge and I am unable to find the proper preprocessor in my list or online; I only have InsightFace+CLIP-H (IPAdapter). It isn't the same as yours.
It works well! Thank u :)
You're welcome!
Drives me crazy. I keep getting the error "Exception: Insightface: No face found in image." It's like ControlNet doesn't really do anything, and I made sure it's enabled and uploaded 6 photos at 1000x1000 with a very clear face in them... any ideas?
Where's your baseball cap from?
I need a new one 😊
What is the full name of the extension? Typing ControlNet gives me a billion results.
When I use it with SDXL I get a strange error, even if I set the SDXL VAE and everything else to XL.
Are there any requirements for the quality or type of image used as an input? Also, is the number of input images important? I can't get any good results. At times you can't even recognize anything at all.
Do you know if this works with Fooocus?
How can I get the "DPM++ 2M Karras" sampler (or other samplers)?
I can't select it in my newly installed Stable Diffusion 1.5.
Do we need any trigger word in the prompt for the LoRA we have added?
No, it's weighted in when you add it like I did.
@sebastiankamph gotcha, thanks for replying!
Did you train your own model?
How do you create a consistent body? I know how to do the face swap, but I can't seem to get a consistent body.
It didn't impress me that much at first, but on the second generation :O it's awesome
I'm guessing this is not going to help for groups? I'm trying to recreate family pictures of us on the moon and more haha. Does this work with the fork that uses DirectML for AMD GPUs? It does not seem to be taking my pictures into consideration.
Hello Sebastian. Thank you for sharing. I tried Reactor and the IP-Adapter on SD 1.5 txt2img with the same prompt (with the LoRA when using ControlNet). When I use the IP-Adapter, the photo becomes very blurred, like it's covered by a glare. What parameters should I adjust?
Did you get resembling faces? I am trying to use his method to match the face, but the results are not at all close and seem generic.
If my root ControlNet folders are "models/Controlnet preprocessor" and "models/Controlnet", which folder do I need to use to import a preprocessor? I put the safetensors file in both of them and in models/Lora, but the custom preprocessor doesn't get added, while custom models get added fine.
Do you have the link for the app mentioned?
How can I change the integrated ControlNet to this ControlNet v1.1.440?
How does this work in img2img or Inpaint?
Yeah, but what about the clothing, accessories...?
Can this somehow work with Fooocus?
I followed the same steps but it's not working as shown in the video; the resemblance is not matching at all.
Can I use it with Intel graphics?
How do you use IP-Adapter with Forge? I tried it on Forge but the result is not as good as in the original Auto1111.
What happens when you turn the preprocessor resolution up to 1024 instead of 512 when using an SDXL checkpoint?
Just use pixel perfect instead
Among all the other similar face-transfer systems (Roop, ReActor, InstantID) this one seems to give me by far the worst results. The likeness is really bad and the overall image quality is terrible.
Did you try Faceswap lab? It is the best I've seen so far.
Is there InstantID for 1.5 yet?
No @TRoJMelencio
There is no reason to use this, it gives tragic results. I keep trying to get some good effect with SDXL and get the same poor results, quality problems all the time.
@TRoJMelencio no
I'm trying to use it with Forge but it won't see the .bin files.
reactor is definitely easier and more relevant for beginners
Reactor is sadly low resolution and only works well on photorealism.
Has this been disabled? I had this working down to perfection for my face, but I took 4 weeks away and now it doesn't work; it just renders random guys' faces even when I use all my same pics.
Did you find a solution? I also get just random faces, not the face from the images I upload.
@popcornviews I have, but it's hit and miss. It works better with between 2 and 4 images of the face. Also found out I had some more updates to do.
But how do you keep unique special elements like birthmarks etc.?
Tough one. I would probably retouch them manually unless you want to specifically train a model.
So what is the best LoRA?
What's the difference between this method and the old one where we use the ReActor extension? Are there any benefits to using IPAdapter?
Reactor with its InsightFace model is best at photorealism, whereas IP-Adapter can do any style.
Thanks @sebastiankamph, you are right!
How do you get all the Fooocus styles in A1111?
Isn't ReActor better?
Someone help me with this please. When I choose the multiple option, this is what appears: loadsave.cpp:1121: error: (-215:Assertion failed) !image.empty() in function 'cv::imencode'
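For context, that OpenCV assertion means an empty image was handed to cv2.imencode, which usually happens when one of the files in the multi-input batch cannot be read (wrong path, unsupported format, or a zero-byte file). Here is a minimal sketch to find the culprit before uploading; the folder path is just an example.
```python
# Check every candidate input image with OpenCV the same way the extension
# would read it, and flag any file that comes back empty or unreadable.
import glob
import cv2

for path in glob.glob("inputs/*"):
    img = cv2.imread(path)
    if img is None or img.size == 0:
        print(f"Unreadable or empty image, remove or re-export it: {path}")
    else:
        print(f"OK ({img.shape[1]}x{img.shape[0]}): {path}")
```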
5:38 Could you please make a video on styles? Your Patreon page is paid, so it would be helpful for free users. Thanks for the tutorial, following you closely 👏😁
I've been working in ComfyUI (and SDXL) for weeks. I think I installed the ReActor node just before we started reading everywhere about IP-Adapter... so I was wondering, about face swapping: is IP-Adapter better than ReActor, or is it mostly the same?
Thanks if someone here has tried them both and could answer me! I'd like to be sure before I decide to uninstall ReActor and choose the other one instead.
And thank you for your new video, Sebastian!
One is not better than the other; it's different and has different use cases. Reactor with the InsightFace model is mostly used for realism.
OK, good to know.
I've only just started using it and I had the impression that ReActor had trouble matching the chosen face to the expression of the target face. But I'll have to do more tests to verify this.
Thanks a lot for your reply @sebastiankamph!