Matteo, I think I speak for many of us in the community: Your content f'n rocks. Thank you so much for taking the time to explain all the nitty-gritty details of IPAdapter and how to use it to create consistent results. Please keep doing what you're doing.
You're speaking for me.
And me!
You don't speak for me, these tutorials are painful to watch lol.
Yes yes yes!!
I wholeheartedly agree. It's been a long time since I've been this enthusiastic about YouTube videos. You ROCK, Matteo!
I'd also like to give voice to the many people saying your content is amazing. It truly is: the rocket ship from learner to intermediate and beyond. You explain the "why", not just show an example of "how" with a click-bait title and an enticing, misleading thumbnail. Thank you. Please don't stop making this content, whatever platform you choose to publish it on. It is fantastic.
That is some high quality spaghetti right there. Compliments to the chef!
😅
You're the newest member of the GOAT ComfyUI workflow creators.
This workflow is becoming self-aware with all those synapses.
Brilliant stuff. Thanks so much for taking the time to produce these tutorials Matt3o 🙂
Community MVP! 🙏🏻
Your videos always give me information I didn't know I needed. Thank you for your hard work. 😊
Another amazing video, Matteo: direct, to the point, and with great teaching. God bless you!
I am at the start of your video and it feels like a sports event :'). ComfyUI can be quite daunting sometimes, but this... this is entertainment. Great content and many thanks.
Many thanks for your videos friend, they're awesome!
Looking forward to the two-people video, please!
Wow, what a great scientific approach. Love your style and obsession with this. I respect you.
Hands down the best ComfyUI content, in my view.
I've been able to make some amazing progress by following along with your videos. Thank you very much!
Thank you for always providing great lectures! I am learning a lot from the detailed and friendly explanations. Thank you again.
Thanks for taking the time to make these tutorials. Very hard stuff to understand, but I'll keep trying :)
Best tutorial and best explanation ever. Very great video, also very inspiring.
Thank you for this video 👍👍👍, again a very in-depth tutorial in simple language ❤❤. I have one request: please upload an in-depth prompting tutorial video for ComfyUI.
+1
thanks!
prompting is such a complex topic and really depends a lot on the kind of image you are trying to do... but it would make a very interesting video for sure.
@@latentvision I will wait for your video on this topic. ❤️
that workflow looks crazy!
These videos are golden. Please keep going.
Another HQ video from Mateo about Stable Diffusion? Yes, please! 😊
Awesome as always!!! Thank you Matteo!
As always, excellent and inspiring content. Thank you
Something I've done for upscaling with 1.5 models is to go back to the original model and separately add the IPAdapter(s) and LoRA at full 100% weight for the upscale pass: lower weights on the model going into the first KSampler so it's not too rigid, but high weights on the upscale to pull it closer to the sample image.
yes that's a good strategy especially when you have multiple people or when the image is not a portrait. Always better to make a rough composition first and refine in a second pass
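For anyone who wants to try the same two-pass idea outside ComfyUI, here is a minimal sketch using the diffusers IP-Adapter integration; the model IDs, the 0.5/1.0 adapter scales, and the 0.4 strength are illustrative assumptions, not the exact settings from the video or the comment above.

```python
# Hedged sketch of the two-pass strategy: low IP-Adapter weight for the first
# composition pass, full weight on the img2img refine/upscale pass.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")

face = load_image("reference_face.png")  # hypothetical reference image

# Pass 1: low adapter weight so the composition stays flexible.
pipe.set_ip_adapter_scale(0.5)
draft = pipe("photo of a woman in a cafe", ip_adapter_image=face).images[0]

# Pass 2: reuse the same components for img2img; full adapter weight and a
# mild 0.4 strength pull the upscale back toward the reference face.
refiner = StableDiffusionImg2ImgPipeline(**pipe.components)
refiner.set_ip_adapter_scale(1.0)
final = refiner("photo of a woman in a cafe",
                image=draft.resize((1024, 1024)),
                ip_adapter_image=face, strength=0.4).images[0]
final.save("refined.png")
```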
Matteo, thank you so damn much for your outstanding work. I have been on a deep dive into all your work, CLIP models, and diffusers.
Matteo - one thing I'd love to know is how the base checkpoints determine faces (without IPAdapter). What I mean is, almost every checkpoint I have tried has a "base" female or male face. Even when changing the nationality or race, the facial characteristics stay similar to that base face. It's only when you start adding names or other positive prompts that you can change the face into something more unique. I understand the models were trained on common sets of portraits, but surely there were many thousands of faces, if not millions. Ideally, in my workflow I want to create a never-before-seen face, then use a workflow such as this to put the character into different scenes and poses. However, I just can't seem to randomise the face enough without putting in lots of ugly-looking prompts.
Always top-notch videos!
Thank you🙏, very useful analytics and a tremendous amount of work done👷.
like magic
Great video as always, man! 😀
These videos are great, thank you!
Awesome!!! So helpful. Thanks!
thank you very much Mateo
Hi Mateo 👋
About changing the hair:
Using Unsampler with ControlNet would change the colour while keeping exactly the same hair, when writing another colour in the prompt. Right?
Instead of masking or inpainting.
Loved the video! Given your experience in this space, are you aware of any tech that will improve upon inswapper/insightface etc. to enable swapping of faces, or generation of custom faces, at angles other than front-on or nearly front-on? This seems to be the largest constraint of the current tech. Thx!
Crazy good inspiration
I love how the name of the models overlap the interface and become completely unintelligible. Good stuff. Remarkable UI design.
Wow, another great video Matteo! I would love to see your take on the new PhotoMaker and InstantID SDXL models. Peace!
I'm trying PhotoMaker right now... not impressed at the moment. InstantID has potential.
amazing! thank you 🙏🙏🙏
Great video, learned a lot!
Amazing video. Thank you.
For me the best result is FaceID v2 SDXL + upscale, but sadly that turns my ComfyUI into low-VRAM mode. I guess there was a FaceID SDXL that got deleted; it ran fine and the results were really good.
Have I finally found the holy grail of Stable Diffusion YouTube?
From what I saw, SD15 FaceIDv2 + 2x upscale works incredibly well (upscale as a simple resample with 0.4 denoise). Adding any other face model makes the face blurry. With FaceIDv2 it's incredibly sharp and keeps a level of "likeness" comparable to the best LoRA.
Can you go a bit further into what you mean by "upscale as simple resample"? Thank you!
I think it's the upscale mode you can choose.
Like, by default it's on (near exact).
Great work!
Just to say, I've been putting two FaceIDs in the same image for at least a month now; I didn't know it was frequently asked on Discord, and you have clearly explained how to do it in other videos after all. Anyway, thanks A LOT for these benchmarks, they really are a time saver. I guess I'll stick to FaceIDv2 and ReActor face swap in the end for total consistency of my characters. FaceID is still useful for hair and face shape.
yeah we already talked about that... you know, on the internet something said a month ago doesn't matter anymore 😄
I'm trying to follow along with the new PuLID demo but am unable to see the dlib model for the Face Analysis node. Only the InsightFace model shows up. Where do we place the dlib models? I have downloaded them but they are not showing up in the model loader. Thanks
Did you try Pony models for face recognition in your tests? If not, I'd like to see you do that next time around, to compare how those models fare in face recognition.
pony models are not compatible in general
@@latentvision Okay makes sense.
Hey, your tutorials are awesome. How can I use the FaceID Portrait workflow on an existing image? E.g. just a simple face swap of an existing image, maybe using inpainting or something? What do you recommend? Is there an existing workflow where I can set the face images and the image to be swapped?
you can do inpainting sure, or you can bbox the face and do a simple image to image
Thank you so much! How would the bbox face and i2i approach work? Am I outpainting the original face into a new image, or something else?
Also, is FaceID the highest fidelity for photorealism, or can InstantID also work for photorealism?
Thanks!! @@latentvision
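For reference, a minimal sketch of the "bbox the face, then image to image" step suggested above, in plain Python with insightface; the model pack, padding, and file names are assumptions, not the exact workflow from the video.

```python
# Hedged sketch: detect the face bbox with insightface, crop it, and save the
# crop so it can be run through an img2img pass and pasted back afterwards.
import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l", providers=["CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))

img = cv2.imread("existing_image.png")
face = app.get(img)[0]                      # first detected face
x1, y1, x2, y2 = face.bbox.astype(int)

pad = 32                                    # keep some context around the face
h, w = img.shape[:2]
crop = img[max(0, y1 - pad):min(h, y2 + pad),
           max(0, x1 - pad):min(w, x2 + pad)]
cv2.imwrite("face_crop.png", crop)
# -> run face_crop.png through img2img with FaceID at low denoise,
#    then paste the result back at (x1, y1) in the original image.
```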
I'm new to all this; is there any way to make the head tilt or show a different facial expression?
BRAVO 👏 🙌 🎉
Great benchmark, thanks a lot. You said that InstantID and FaceID Portrait rely on InsightFace, thus you need to buy a license for commercial use. Does this also apply to using the Apply IPAdapter FaceID node, as it also uses InsightFace? Thanks again.
it's not the node itself, it's the insightface model. If you don't use the model for the image generation, no problem
@@latentvision thanks. But for my understanding and to double check: doesn't the node use the model under the hood?
@@DanielPartzsch if you use any FaceID model, then yes, those make use of insightface. You should check their license. If you don't use any of the FaceID models, then you are fine.
Sorry if it is a dumb question, but how can they tell whether somebody used InsightFace just by looking at the final image?
Is it possible to apply makeup on an input image without using a checkpoint? I've been working on this for a long time but I'm not sure if my efforts are in vain.
How do I calculate the embedding differences of my workflows? I want to see if I recreate the same individual at the end. Thanks!
I'll add a node for that
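In the meantime, a hedged sketch of the usual metric: cosine similarity between the InsightFace embeddings of two images. The model pack and file names are placeholders.

```python
# Hedged sketch: compare two faces via cosine similarity of their InsightFace
# embeddings. normed_embedding is already L2-normalized, so the dot product
# equals the cosine similarity.
import cv2
import numpy as np
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l", providers=["CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))

def face_embedding(path: str) -> np.ndarray:
    faces = app.get(cv2.imread(path))
    if not faces:
        raise ValueError(f"no face found in {path}")
    return faces[0].normed_embedding

sim = float(np.dot(face_embedding("reference.png"),
                   face_embedding("generated.png")))
print(f"cosine similarity: {sim:.3f}")  # closer to 1.0 = more similar
```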
Hi,
@Latent Vision, how do you use multiple attention masks when you have multiple reference photos?
there's a video about attention masking!
@@latentvision my bad, I thought the mask selected what is considered in the input! But it's for what is influenced in the output! Sorry
Amazing video. One question: what is the difference between weight and weight_v2? At 5:43 he mentioned these weights, but I am unable to understand them. Can someone help?
weight is the global weight. weight_v2 is used for the CLIP vision embeds. I suggest a value between 1.5 and 2
So for commercial use, only Plus Face and Full Face are allowed?
Thanks!
Is PhotoMaker a different, separate model?
yeah completely different thing.
Hi and thank you for your splendid video.
Does anyone know why i have this error please?
---
Error occurred when executing InsightFaceLoader:
No module named 'insightface.app'
---
I've proceeded with the install inside ComfyUI via Git URL and everything was correct, but I cannot test InsightFace.
Thank you in advance.
Olivier
Such good information here, thanks for the research! One thing that might be nice is establishing a baseline: using actual photos of the same person in different scenarios to see how low the difference tends to be with real-life variation.
It's been 9 months; how do these techniques compare to newer ones like PuLID, please? Thank you for sharing.
in more recent videos I talked about PuLID and others
Awesome demo, Mateo!!!
Respect.
#NeuraLunk
Awesome title
IKR?! It basically wrote itself!
Best videos on YouTube with respect to Comfy! A few videos that I think would help others (including myself) are inpainting with SDXL and photo bashing on SDXL. I find inpainting with SDXL inconsistent. It would be nice to see these topics covered with images that are geared towards landscapes and not just people, such as adding objects to a scene where prompting alone can't handle it because you are adding several objects. Just my two cents, and your channel is just awesome!
Ferniclestix has some great tutorials about those! He and matt3o have some of the best Comfy tutorials.
thanks man this helped
As above, will the model's hair also end up uniformly similar?
Matteo, can you confirm that this only works when the positive conditioning is plugged directly to the sampler, i.e.: it breaks when using concat/combine with another positive prompt?
no, it should always work. Of course more conditionings will pollute the composition
@@latentvision You're right, I guess that was the case. Had to bump up the prompt for it to take effect again. Thanks 👍
treasure
Thanks
❤
I'm struggling when I want a specific eye color, especially the abnormal colors. Any tips?
well inpainting is the easiest solution
try adding heterochromia in the prompt
Did anyone figure out how to run insightface with CUDA instead of the CPU?
leave insightface on the CPU, you don't need to bother the GPU for feature extraction
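That said, if you do want detection on the GPU, the knob is the onnxruntime provider list rather than a ComfyUI setting; a hedged sketch, assuming the onnxruntime-gpu package is installed:

```python
# Hedged sketch: insightface runs on whatever onnxruntime providers you pass.
# CUDAExecutionProvider requires onnxruntime-gpu; CPU stays as a fallback.
from insightface.app import FaceAnalysis

app = FaceAnalysis(
    name="buffalo_l",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
app.prepare(ctx_id=0, det_size=(640, 640))
```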
Don't get me wrong but... can I get your auntie's number? I'll help her carry the groceries!
Emotions still look creepy, lol :)
For unrealistic characters (anime) 🥲 it's bad.
you need to put in more work, but it works for that too