FaceID: new IPAdapter model
- Published 27 Jul 2024
- New FaceID model released! Time to see how it works and how it performs.
Workflows: f.latent.vision/download/face...
Motion controlnet: huggingface.co/crishhh/animat...
IPAdapter Plus Extension: github.com/cubiq/ComfyUI_IPAd...
Discord server: / discord
00:00 Intro
01:14 Basic FaceID
04:23 Enhancing FaceID
08:37 Comparing all face models
10:54 Reality Check
11:47 StableZero123 with FaceID
14:10 Animated StableZero123
17:48 Outro
** Background music **
- "Part A" by Alexander Nakarada (www.serpentsoundstudios.com) Licensed under Creative Commons BY Attribution 4.0 License
- "Menace" Synthwave by Karl Casey @ White Bat Audio (whitebataudio.com/)
- "Last Stop" Synthwave by Karl Casey @ White Bat Audio
## IMPORTANT ##
Since I published this video, things have changed a little! A new FaceID Plus model was released and I had to change the Apply IPAdapter node. The old workflows won't work anymore, but I updated the workflows in the ZIP file linked in the video description. f.latent.vision/download/faceid.zip
Sorry guys but this is a very fast moving world! The overall idea explained in the video though is still valid.
V2 of FaceID-Plus coming soon! Quote: "You can adjust the weight of the face structure to get different generation!"
Thank you for the update. We love your content.
Thanks for the heads up. Really appreciated. Thanks for your amazing work.
v2 should work with the current codebase@@joeduffy52
update: it turns out it doesn't. I'll push a new version shortly and also post a new video about it
Please never stop making these matteo, I can't tell you how much this helps us all
Great job, amazing how much value you always manage to pack into these videos!
Your videos are the most concentrated, no-filler tutorials on these topics! Thank you!
It is truly amazing how well this works at capturing the look of someone and projecting it into a whole new scene. Your time and effort to explain these tools is greatly appreciated!
👏Spectacular! 🙏 Thank you for always taking such great care of the community with your content. 👊
omg Matteo, this is pure magic. Your tutorials are from another world.
Applause for Matteo. 🙏🙏🙏🎆🎆🎆
Thanks.
I read you msg just a few days ago about this on the dev thread in the matrix network and you already published the node. Incredible work.
many thanks for sharing, am getting some great results and experimenting with the new nodes/workflows!
Fantastic! That's my best Christmas gift this year! Thank you!
Dangerously underrated channel.
it's an advanced niche
@@sherpya For now. But it just means we got to support the community.
Super fantastic source of knowledge! Amazing video pacing as well, greetings from sunny Greece :)
Oh my, the results are darn near perfect, good job
Thanks and happy holidays to you too!
Thank you great video. very informative. appreciate the work you and the team put into this custom node. really appreciate it.
I had no idea you are the ComfyUI developer. All this time, I've been saying your videos are the best tutorials, and now it makes sense haha.
Thanks for all the work Matteo!
I salute you for your custom node in Comfy! Thank you!!!!🙏🏽
every video is a great surprise full of knowledge
Awesome guide as usual. Keep up the great work! 😊
The adapter we need ! Merry Christmas !
brilliant, v2 seems even better! :) hope u had great xmas Matteo and new year! :)
Merry Christmas my man! Your videos are always utter S tier!
Glad to see you working with Hu Ye and laksjdjf! The IPAdapter holy trinity haha.
Thanks for providing the workflow. New sub here👍
Wow... just wow, thank you to all you wonderful people for making this
I love that your tutorials are always very helpful and straightforward.
First you explain what you will show us today. Then it begins very basic with a good result. And then the nodes become more and more numerous, with more complex results, and with good and fast explanation, step by step.. 😅
😎✌🏻💪🏻
Wishing you great holidays and a good Christmas time 🎉❤
hey Matteo :) thank you for another great video :)
as always great content
You are a maestro of ComfyUI!
your videos are the best, please upload more often!
damn, I'm not a machine 😊😊😊
Amazing job once again!
God, this is the best Christmas present ever!
We really appreciate it.Thank you🙏.
I was already getting some great results with face full, even on 3D cartoony models. This is just amazing.
Why am I not getting good results in img2img? 😢
amazing video! thank you for sharing
Great work Matteo! 🙌🙌
Awesome tools, thanks Matteo!
You are a legend! Keep going
Another brilliant tutorial. 👌
WOW! Thank you for sharing.
As always great content. One of the few channels where the author did the work himself rather than using someone else's. All best 🙂
This is pretty easy to follow!
My god this is mesmerizing!!
I needed just that, amazing
incredible... thank you!!!!
Wow wow wow. Amazing video 😊😊
Thanks for the video, you saved me a lot of tinkering :)
always happy to help a fellow tinkerer
I love the consistency of the character through the different versions of the woman with red hair
Amazing!
gonna try it first in the morning
incredible
It's so exciting
i get it. thank you very much!
awesome!
It improves a lot if you throw in a 0.1-weight sd15 adapter on a full-body picture, plus the FaceID and full-face models on a face picture. After a few dozen x/y plots to fine-tune the other weights for the best starting points, this setup competes pretty well with a trained LoRA and blows away anything I could do with only 1-2 subject pictures.
Where do you throw in the full-body picture? Do you use a third IPAdapter or what? Do you mind sharing a workflow for this? Cheers!
Thanks and best wishes :)
This model is released exclusively for research purposes and is not intended for commercial use.
thank you.
can someone help me with installing insightface ?
Although I don't like the InsightFace team, thank you for your IPAdapter update ❤️😊
you and me both... InsightFace is a hot mess :P
@@latentvision :D hehe, be careful, they are gonna claim copyright on the video again.
Anyway, thanks for your inspiration. I will keep improving the workflow.😉👍
thanks for the update - will you do a new video showing the differences with the updated faceid plus models?
Bro you keep coming up with amazing stuffs!
yes the sharpening that I've applied is in the pixel space, close to what you'd do in photoshop
@@latentvision Thanks!
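For anyone curious what "sharpening in the pixel space" means in practice, here is a minimal, illustrative sketch of the classic unsharp-mask idea (this is not the video's actual node code; the 3x3 box blur and the `amount` parameter are simplifications for clarity):

```python
import numpy as np

def sharpen(img, amount=1.0):
    # Unsharp mask: blur the image, then add back the high-frequency
    # detail: sharpened = original + amount * (original - blurred).
    # A 3x3 box blur stands in for the usual Gaussian blur here.
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy,
               1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    return np.clip(img + amount * (img - blurred), 0, 255)
```

On flat regions the blur changes nothing, so only edges get boosted, which is how Photoshop's Unsharp Mask behaves as well.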
Sending ❤ from south korea
Sending ❤ from Italy 😊
Amazing, can't wait to see it on XL!! Just to be sure, can it replace the ReActor face swap?
Well... it's very good, but as you can see it's not always perfect.
Haven't finished the video yet and I already learned a new skill: Image Crop, like magic! Please keep making videos like this, we need to improve our skills to advance. Too many entry-level tutorials. By the way, when using FaceID I always get the error "FaceID must be provided for FaceID models." I googled it and found many people have this issue. I am using a Mac M1. Any tips? Thanks a lot! Oh, forgot to mention, I subscribed right away!
you need to check your insightface installation and use the apply faceid node
Hi Mato, thanks for your response. I did use the "Apply IPAdapter FaceID" node. I downloaded the ipadapter_plus folder from your repository and replaced the one in the custom_nodes folder. But it still shows "No Module named InsightFace". Could you please advise how I can check the InsightFace installation further? Thanks! @@latentvision
ooooooo anything with InsightFace is fking awesome. Hold on a minute, there's an image crop tool. Someone needs to do a video on the top 10 most useful situational nodes.
Thanks for all you do! Do you remember the checkpoint used for the thumbnail image?
Dreamshaper (with FaceID)
Great timing! I know it only just dropped, but are we getting an SDXL FaceID?
I'm sure we will soon enough
Thanks man, it even works with A1111, except for the portrait model. One question: how do you use the LoRAs? Are they necessary? It seems to work fine without...?
Hello there, thanks a lot for sharing! I'm only having an issue when I try to apply the "IPAdapter FaceID" node; I get this error: ERROR:root: - Return type mismatch between linked nodes: insightface, CLIP_VISION != INSIGHTFACE. Any idea what I could be doing wrong?
👀
Cool stuff, does it also work on drawn faces in scribble or inked comic style? Edit: 10:45, it works perfectly it seems.
Since the title of the motion controlnet checkpoint is "ad/motion", I do not know if it is the "basic", "less motion" or "more motion" variant. Which did you use? 16:40
basic
Oh my goodness... I understood nearly nothing... well, I've opened the ComfyUI user interface a few times... and it's great for specialists, nothing for architects at the moment, much too complicated. BUT! This really shows the edge of what's possible. Now just wait 1 or 2 years, put a much simpler interface in front of those nodes and a large language model to communicate with the backend... something like Microsoft's Copilots, but for stuff like this... mind-blowing.
yes I agree comfy is incredibly powerful but it would need a rewrite in the front end
How can I resolve an error with the insight face loader node? :(
GJ again :-) ... is it comfyui only or does it work with a1111 too?
it needs a special configuration, I don't think A1111 is supported yet
@@latentvision will that happen any time soon?
I'm sure they will fix it soon@@freneticfilms7220
Hey Matteo, is it possible to take the input image with multiple faces (lets say 2) then after the image generated, the two faces are put back in.?
yes of course with a segmentator
Thanks for the videos! How does rescale CFG work and why would you use it rather than lowering the CFG number?
you get the benefit of a high CFG and the result of a low CFG... well almost :)
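To make that answer concrete, here is a rough numerical sketch of the CFG-rescale idea (function name, blend factor, and exact formulation are illustrative, not the node's actual code): compute normal classifier-free guidance, then pull the result back toward the magnitude of the conditional prediction.

```python
import numpy as np

def cfg_rescale(cond, uncond, cfg_scale=7.5, rescale=0.7):
    # Plain classifier-free guidance: push away from the unconditional
    # prediction along the conditional direction.
    guided = uncond + cfg_scale * (cond - uncond)
    # High CFG inflates the magnitude (std) of the prediction; scale it
    # back to the std of the conditional prediction...
    normalized = guided * (cond.std() / guided.std())
    # ...then blend, so rescale=0 is plain CFG and rescale=1 is fully
    # renormalized guidance.
    return rescale * normalized + (1 - rescale) * guided
```

This is why it differs from simply lowering the CFG number: the strong guidance direction of a high CFG is kept, and only the overall magnitude (which causes burned-out, over-contrasted images) is tamed.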
Hi, thanks for the great tutorial. Not sure if someone has asked this question, how do I install insight face? I cannot find any tutorial out there that can help me :(
Can anyone help me? Error occurred when executing the 'Apply IPAdapter' node:
AttributeError: 'ClipVisionModel' object has no attribute 'get'
👋
This is an incredible model! Is there a cloud service available where I can run it easily, or even a Google Colab that makes it possible? Congrats guys, incredible job!
people are using it on colab so I'm sure it's possible. I don't have the specifics as I only used it locally
@@latentvision Perfect! Thanks! I'm gonna try to find a Google Colab on the internet! :) Thank you!
Is there any feasible way to change the outfit of the character in the video by the use of the IPAdapter and ComfyUI? Best Regards and Good Job as always~
yeah that is feasible, it takes quite some tinkering... I'm thinking of a future video about it
Thanks for the tutorial! I need to ask a question. Maybe it's because I didn't install insightface in ComfyUI, but when I tried to load your workflow, in the IPAdapter Model Loader node I could not find the ip-adapter-faceid_sd15.bin, which I already copied into the models folder with the others. Only this one I could not pick. And if it's because of the insightface installation, is there any guide for it? I searched for one but found nothing. Thanks!
For me, I placed ip-adapter-faceid_sd15.bin in the comfyui/custom_nodes/comfyui_ip_adapter_plus/models folder :)
I did the same, that's why the other models can be picked, but I'm not sure why this new one isn't working the same.@@PiakBot
stop comfy and re-run, it should show up if you refresh the page
Weird thing is, I already restarted multiple times and re-downloaded the bin file, and it's still not available in the list. Only this FaceID one is missing; all previous models are available. This is why I suspected it's related to insightface.@@latentvision
Thank you for this! I was able to finally get consistent faces with my characters! But for the animation part, I'm struggling to find the "ad/motion.ckpt" model for controlnet.. where could I find that?
thanks! the CN is linked in the video description!
@@latentvision Of course! I dismissed it because of the different name XD thanks!
Hi, I have an error, please help me:
Error occurred when executing IPAdapterApply:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
Thank you!
Can you use multiple images from the same character as input?
not at this moment, but it would be easy to do. I have to check if it makes a difference
It was not possible to start; it shows the following error (I already completely reinstalled ComfyUI, I thought there was something wrong with it, but it didn't help):
Error(s) in loading state_dict for MLPProjModel:
size mismatch for proj.0.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1664, 1664]).
size mismatch for proj.0.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1664]).
size mismatch for proj.2.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).
I'm really loving these tutorials but I find the music to be a bit distracting, it would be nice to have something a bit more ambient instead.
Why not SVD instead of AnimateDiff, or maybe for a second pass? :)
SVD can't be controlled in any way, overall it is pretty bad honestly unless you use it for very specific subjects (flames, smoke, waves, ...)
Which folder do we put the IPAdapter models in?
I don't have an ipadapter folder, do I have to create one?
yes, just create the directory. the README in the repository explains all
Any time I try running the face_model_comparison workflow after making sure all the models are correct (probably not), I'm encountering a 'size mismatch' error in the IPAdapterApply node, indicating a model architecture mismatch, specifically with proj.0.weight, proj.0.bias, and proj.2.weight dimensions differing between the loaded model checkpoint and the current MLPProjModel architecture.
I can't pinpoint what is causing the problem, so maybe you or someone else reading this can help. I'm thinking maybe it's because the checkpoint showed you used RealisticVisionV51_v51VAE and I only have the latest version, Realistic Vision V60B1_v51VAE, and can't find the version that says 51 and then 51 for the VAE. I don't exactly know what all this means but I'm hoping I can learn.
no, the checkpoint version is irrelevant. The problem is maybe the image encoder or the ipadapter models.
@@latentvision could you direct me to the image encoder? I have one but not sure why it’s not working
check here under the installation instructions github.com/cubiq/ComfyUI_IPAdapter_plus
I get this error: Error occurred when executing InsightFaceLoader:
It doesn't matter if I choose CPU, CUDA or ROCm.
I hate this, EVERY SINGLE TIME I do something with ComfyUI I get this error.
how would this work with LoRa and controlnet would it make it even more perfect?
yes, absolutely!
Hi Matteo, I cannot find the .safetensors file you have selected in the "Load IPAdapter Model" node anywhere on the internet to download: IPAdapter_imge_encoder_sd15.safetensors. Can you tell me where to find it? Thank you
please check the extension repository, it's linked in the video description
Is there a tutorial on training a new vision model, and/or ipadapter model?
I have some resources, a smidge of technical know-how, and some data, and wouldn't mind contributing, possibly.
yeah I've been asked a lot about training. It's really easy honestly but it's hardly material for a video, more like a written article
@@latentvision Even just a gist with a copy paste, basic outline of the code would mean a lot.
Thank you so much for the helpful video. I am experiencing a bit of a problem installing insightface: when I do pip install insightface within ComfyUI's python_embeded, it throws a 'Failed building wheel for insightface' error. I am on PC, do you know how to resolve this?
The comfyui reactor node github has a prebuilt whl for windows
@@terbospeed What's really strange is that I can install it just fine with python 3.10 but not with the embedded python - which comfyUI won't detect.
@@newbment yea I'm still trying to get it working on linux. the discussions on the video might yield some results soon though. *Got it working by manually updating IP-Adapter plus, the comfyui manager update method seemed to fail
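Since the insightface install question keeps coming up in this thread, here is a small hedged helper that captures the key point: insightface has to be installed into the same Python interpreter that runs ComfyUI. The python_embeded path and the onnxruntime companion package are assumptions based on common setups, not official instructions.

```python
import importlib.util
import sys

def pip_install_command(python_exe=sys.executable):
    # Build the pip command to run from a terminal. On ComfyUI's Windows
    # portable build, pass the embedded interpreter instead, e.g.
    # r"python_embeded\python.exe" (path is an assumption; adjust to yours).
    # insightface usually needs onnxruntime to actually run its models.
    return [python_exe, "-m", "pip", "install", "insightface", "onnxruntime"]

def insightface_installed():
    # True if the interpreter running this script can import insightface.
    return importlib.util.find_spec("insightface") is not None
```

If building the wheel fails on Windows, installing a prebuilt insightface .whl (as mentioned above for the ReActor node) with that same interpreter is the usual workaround.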
Have a problem. 'ClipVisionModel' object has no attribute 'get'. Serached internet nothing about it. Please help. Thank you
you are probably selecting the wrong models, hard to say though with so little info
IPAdapter: InsightFace is not installed! How do I solve this? I followed the installation on the GitHub ReActor node page for ComfyUI
Is there a way to keep this really consistent head but also body ? And get pictures from further away, full body, from behind, bird view, cowboy view, etc... ? Thanks
yes of course, this video was more to check how good the model is at plain faces. You can do a first pass of a full-body person, then upscale, then a second pass only for the face to get the likeness back.
@@latentvision So in theory I could also do that for a detailed background ?
Like get a good background
Then put a pose and get a good body
Then pass that to get the clothing I want with a mask
Then pass all that to the character face with a mask ?
Thus getting a complete and detailed character in a complete detailed background
I need to learn more about this it’s really interesting thank you for the answer
yeah you probably don't need so many steps, also check my "attention masking" video@@DragonZ3R0
@@latentvision On my way 🤝