ComfyUI Multi ID Masking With IPADAPTER Workflow
- Published May 13, 2024
- In this ComfyUI video, we take three sets of images and incorporate them into the final output using IPAdapter and masking. I am using images of Taylor Swift, Margot Robbie, and Jenna Ortega, but you can use any images you like. For best results you will need multiple angles of each face; avoid blurry or low-quality images.
Workflow:
github.com/GraftingRayman/Com...
Filename: MaskingWithIPAdapter.json
Anything Everywhere:
github.com/chrisgoringe/cg-us...
IPAdapter:
github.com/cubiq/ComfyUI_IPAd...
GR Prompt Selector:
github.com/GraftingRayman/Com...
Ultimate SD Upscale:
github.com/ssitu/ComfyUI_Ulti...
ComfyRoll:
github.com/Suzie1/ComfyUI_Com...
Inspire Custom Nodes:
github.com/ltdrdata/ComfyUI-I...
Model: Juggernaut Reborn:
civitai.com/models/46422/jugg...
#0013
#ComfyUI #upscale #anythingeverywhere #ipadapter #inpainting #masking #imageenhancement - Science & Technology
Nice tutorial! I've learned a lot!
Useful, lot of info, easy to understand, thanks.
Glad it was helpful!
great for learning
You are a pro. I love your videos thank you
Very useful tutorial, thank you so much!
Glad it was helpful!
Great work. I hope people will start noticing your channel. 😀
I hope so too!
you are amazing~
Subbed, cool help, thanks
Awesome, thank you!
Awesome video sir. +1 Sub.
Very nice video. Useful tips.
Still got a few questions, if you don't mind.
Is there any reason why you use the nodes [ LoRA Loader Model Only + IPAdapter Model Loader + CLIP Vision + InsightFace ] instead of the IPAdapter Unified Loader FaceID?
For the face references, I see you use a batch load but no "Prep Image For ClipVision" node, which I've seen used in many workflows. Maybe you prepared your reference images manually? Did you do anything special to your dataset, like resizing or cropping the images first?
Anyway, I used to run a second KSampler at 0.5 denoise. I didn't think about running a third one; I'll try that out, nice idea.
Thanks again, good job.
The unified loader sometimes does not work for me, not sure why; I tried a few different workflows and they seemed to fail, so I swapped in the standard version, which works a treat. I use a crop-face node prior to this workflow to save the faces only.
Nicely done! Please, how can I make the nodes show the laser-light links?
That's done with the Anything Everywhere node.
Question: can I add a FaceDetailer after the KSampler so that the different faces don't get averaged out to look the same?
Yes you can, I have done so myself, but the results show the KSampler does a good job as it is.
I can't find how you made this glowing effect on the routes?
That is the Anything Everywhere node
I love it, but how do I make the three faces look the same after upscaling? Right now, they still look different
Are you running them through a KSampler with a lower denoise after the upscale?
Do you have a link to download the mask files, or do we need to create them ourselves? On your GitHub I noticed you have a Multi Mask Create node, but I think it doesn't create part of the mask as transparent? Also, how do you get the links to the Anything Everywhere nodes to light up as you do in the video? Is it a setting somewhere once the node is installed? Thanks!
You can use the Multi Mask Create node; it is transparent. The Anything Everywhere node has settings in the ComfyUI settings: under "Anything Everywhere animate UE links" select "Both", and further down change "Anything Everywhere show links" to "Selected and mouseover node".
@@GraftingRayman Thanks for the reply. I'll try generating the masks again - I just had the Multi Mask Create linked to Mask Previews, then right-clicked and saved the images. They just looked different to your video (they were black and white strips rather than black and grey) and the flow didn't seem to work for me. Not sure if it was because I only have one picture, so I swapped the Load Image Batch for a Load Image node. I guess you could also have the Multi Mask Create in this flow directly generating the masked images? I'll give it a try. I'll also take a look in the settings as advised. Cheers.
@@UTA999 I use the Multi Mask Create node in my updated workflow; it works just the same. When you save the image it does not keep the transparency, but when used inside ComfyUI it does.
@@GraftingRayman Thanks for the confirmation. After a bit of playing around I now have all the issues I was experiencing sorted.
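For anyone who wants to build the strip masks by hand rather than with the Multi Mask Create node, here is a minimal sketch of the black-and-white strip masks discussed above. It assumes plain NumPy and equal vertical regions (my assumption; the node's exact output may differ):

```python
import numpy as np

def make_strip_masks(width: int, height: int, count: int):
    """Split the canvas into `count` equal vertical strips, one mask per region.

    Each mask is 255 (white) inside its strip and 0 (black) elsewhere,
    so together the masks tile the canvas without overlapping.
    """
    strip = width // count
    masks = []
    for i in range(count):
        mask = np.zeros((height, width), dtype=np.uint8)
        left = i * strip
        # The last strip absorbs any remainder when width % count != 0.
        right = width if i == count - 1 else (i + 1) * strip
        mask[:, left:right] = 255
        masks.append(mask)
    return masks

# Three masks for a 1024x1024 canvas, one per face region.
masks = make_strip_masks(1024, 1024, 3)
```

Saved to PNG these come out as plain black-and-white images; the transparency the node provides inside ComfyUI is not preserved on disk.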
I've been trying to use this with SDXL, and the faces in my output don't look much like my reference pictures. I had three quick questions:
1. When you said SDXL wasn't working well for you in the video, did you mean in general, or specifically that the faces didn't come out with this workflow?
2. Do the reference images need to be close-ups of the face, or will full-body reference photos work as well? They can be JPEGs, right?
3. If I'm only making an image of a single person, is it okay to use a mask that is just a transparent PNG without the black part?
EDIT: Actually, I think I have it working better now; I just needed to adjust some of the models I was using. I think I had the wrong IPAdapter model selected. Thanks again!
Hi @n3bie, reference images work best if they are headshots. I've not had much luck with multi-person generations in SDXL; it works fine for a single person, but as soon as I add a second or third person it falls apart. That is with InstantID, though; it works fine with FaceID.
@@GraftingRayman Ah I see. I actually only built a single-character workflow, but it's nice to have that info for when I try to expand it. I have to say, though, this workflow as a single-character generator is working fantastically for me with SDXL. I increased the reference images to about 20; maybe a third of those are close-up portraits as you suggested, but after having trouble generating anything but close-ups of the face, I threw a bunch more in the folder, including full-body poses, and I'm having pretty good results generating a lot of different poses that all look like the reference model. If anybody comes across this, I'm using Darker Tanned Skin off CivitAI as an SDXL-compatible LoRA, and it's working quite nicely. Thanks again Rayman!
This is a very useful tutorial, and you come across as someone proficient in this field. As the author of the "GR Prompt Selector" node included in the workflow, could you provide instructions on how to get it running? I've seen multiple people having problems with it down in the comments. Cloning the repo into the "custom_nodes" folder obviously isn't enough for some reason.
I'm not really a coder, but from what I noticed, the requirements were missing for some users. I added a requirements.txt file to the repo not long ago; install it with "pip install -r requirements.txt" in the node's root folder, and that should fix most if not all issues.
@@GraftingRayman I've done that already. ComfyUI is still unable to load the nodes. In one of your replies to a different comment you talked about having a "clip" folder in "python_embeded". How can that be achieved?
A lot of people have both a system Python and the Python bundled with the portable version of ComfyUI; "pip install clip" installs into the system Python, so you need to run the embedded Python instead. In your ComfyUI folder, run the following command: ".\python_embeded\python.exe -m pip install git+https://github.com/openai/CLIP.git". This will install CLIP in the correct place.
@@GraftingRayman that's it! it works now, thanks for your help.
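A quick way to confirm which interpreter a `pip install` actually targeted (useful when a system Python and the portable `python_embeded` coexist, as above) is to ask Python itself; run this with the same executable you passed to `-m pip`:

```python
import sys

# The interpreter that is actually running; packages installed with
# "<this exe> -m pip install ..." land in this interpreter's site-packages.
print(sys.executable)
print(sys.version)
```

If the printed path is not inside `python_embeded`, the install went to the wrong Python.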
Damn bro, what are your PC specs? FaceID runs so slow on my old machine.
Haha, that speed is done by editing; it's still slow as anything.
Please make a workflow for SDXL as well, with InstantID for multiple people.
The results are very poor with InstantID for multiple people; it's not worth the effort.
Getting below error,
Error occurred when executing VAEDecode:
'VAE' object has no attribute 'vae_dtype'
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
How much VRAM do you have on your GPU?
Hi, the workflow is connected, but the resulting face doesn't match the source. Any solution? The only node I changed is the CLIP text prompt, back to the default one.
Can you send me a screenshot on discord or github?
Please help, I got the message (IMPORT FAILED) for GR Prompt Selector in the manager.
what is the full error?
Dear Sir @@GraftingRayman, here is the log: File "C:\Users\ccc\OneDrive\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_GraftingRayman\__init__.py", line 1, in
from .GRnodes import GRPromptSelector, GRImageResize, GRMaskResize, GRMaskCreate, GRMultiMaskCreate, GRImageSize, GRTileImage, GRPromptSelectorMulti, GRTileFlipImage, GRMaskCreateRandom, GRStackImage, GRResizeImageMethods, GRImageDetailsDisplayer, GRImageDetailsSave
File "C:\Users\ccc\OneDrive\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_GraftingRayman\GRnodes.py", line 11, in
from clip import tokenize, model
ModuleNotFoundError: No module named 'clip'
You may try "pip install git+https://github.com/openai/CLIP.git"; that should install the CLIP package. Note that a plain "pip install clip" pulls an unrelated PyPI package, not OpenAI's CLIP.
@@GraftingRayman I have both Auto1111 and ComfyUI portable installed. The clip package got copied to the Pinokio Miniconda folder, from where I copied it into the ComfyUI Lib folder. I still wasn't able to load the node: import failed via the manager, and the same result with git clone; the folder contents are the same. I also seem to have onnx and onnxruntime installed.
Were both the clip and the clip info folders copied to ComfyUI_windows_portable\python_embeded\Lib\site-packages?
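To check whether `clip` (or any other module) is visible to a given interpreter without triggering a full import, a small helper like this can be run with the embedded python.exe. This is a sketch; the helper name is my own, not part of ComfyUI:

```python
import importlib.util

def diagnose(module_names):
    """Return {name: True/False} for whether each module is importable
    by the interpreter running this script."""
    return {name: importlib.util.find_spec(name) is not None
            for name in module_names}

# Example: check the modules these node packs complain about.
print(diagnose(["clip", "cv2", "onnxruntime"]))
```

Running it with `.\python_embeded\python.exe` versus the system `python` makes it obvious which installation is missing the module.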
Getting below error,
File "C:\Users\Mahesh\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_GraftingRayman\GRnodes.py", line 8, in
from clip import tokenize, model
ModuleNotFoundError: No module named 'clip'
run the following: pip install git+https://github.com/openai/CLIP.git
"When loading the graph, the following node types were not found:
GR Prompt Selector
Nodes that have failed to load will show as red on the graph."
It's inside ComfyUI\custom_nodes but doesn't work.
Help me, please.
Run the following command in your custom_nodes folder, or use ComfyUI Manager: git clone https://github.com/GraftingRayman/ComfyUI_GraftingRayman
@@GraftingRayman "It's inside ComfyUI\custom_nodes but not work" - it's installed, but "Failed to load"
@@GraftingRayman maybe it's the same problem:
```
DWPose: Onnxruntime with acceleration providers detected. Caching sessions (might take around half a minute)...
2024-05-20 16:46:37.3574522 [E:onnxruntime:Default, provider_bridge_ort.cc:1534 onnxruntime::TryGetProviderInfo_TensorRT] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\Ai\SD\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
*************** EP Error ***************
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:456 onnxruntime::python::RegisterTensorRTPluginsAsCustomOps Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2024-05-20 16:46:38.4698587 [E:onnxruntime:Default, provider_bridge_ort.cc:1534 onnxruntime::TryGetProviderInfo_TensorRT] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\Ai\SD\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
*************** EP Error ***************
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:456 onnxruntime::python::RegisterTensorRTPluginsAsCustomOps Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
```
I don't have any D:\a folder at all, by the way
Delete the node's folder and reinstall it.
- Value not in list: vae_name: 'AnimateEveryone\diffusion_pytorch_model.bin' not in []
I'm getting the above error; I'm not able to find this VAE.
If you have put the model in a different folder, you will need to change the path; I manually copied mine into the checkpoints\animateanyone folder.
Please help, I got the message (IMPORT FAILED GraftingRayman) for GR Prompt Selector in the manager as well. I'm using Stability Matrix.
Try running this in your ComfyUI folder: ".\python_embeded\python.exe -m pip install git+https://github.com/openai/CLIP.git"
@@GraftingRayman Thank U so much! it works~
Seems that your custom nodes extension, as well as the Inspire pack, don't import for me. I installed the CLIP dependencies but it didn't help.
What error do you get?
@@GraftingRayman For both the Inspire pack and your extension, it seems to have to do with a "c2" module that was not found. Looking into the issue a bit, maybe I only have Python 3 installed and I'm required to use a 2.x.x version of Python instead?
Python version v3 is fine, I am not aware of a C2 module that is required, will look into it
@@GraftingRayman I'm sorry i wrote it wrong, i meant CV2 module is not found when importing both of those extensions. Apologies.
@@thefransvan5966 Aaah, the cv2 module is provided by the opencv-python package. You can simply run "pip install opencv-python" if you use system Python, or if you are using ComfyUI portable, run ".\python_embeded\python.exe -m pip install opencv-python" in your ComfyUI folder; that will resolve the issue with cv2.
Hey, got a sub from me, thanks, this was great. Does adding a KSampler after SD Upscale generally improve quality?
Yes it does
Can you share your mask.png files with us? They would be very useful.
You can use my mask create node instead; check the GitHub link in my bio.
@@GraftingRayman Thank you~
Thanks, but the workflow link won't work.
If you right-click the link and save the file as .json, it will work.
@@GraftingRayman No, still not working; apparently it's a Pastebin issue.
@@RodiZai-pk9ty you can download it from my GitHub: github.com/GraftingRayman/ComfyUI_GR_PromptSelector/tree/main/Workflows
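If a downloaded workflow still refuses to load, one quick sanity check (a sketch; the filename is the one from the description above) is to confirm the saved file is actually valid JSON rather than an HTML error page from Pastebin:

```python
import json

def is_valid_workflow(path):
    """True if the file parses as JSON (ComfyUI workflows are JSON documents)."""
    try:
        with open(path, encoding="utf-8") as f:
            json.load(f)
        return True
    except (OSError, json.JSONDecodeError):
        return False

# Example (assumes the file sits in the current directory):
# print(is_valid_workflow("MaskingWithIPAdapter.json"))
```

If this returns False, re-save the file from the raw link rather than the rendered page.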