ComfyUI Multi ID Masking With IPADAPTER Workflow

  • Published May 13, 2024
  • In this ComfyUI video, we take 3 sets of images and incorporate them into our final output using IPAdapter and masking. I am using images of Taylor Swift, Margot Robbie and Jenna Ortega; you can use any images you like. For best results you will need multiple angles of the face, and you should avoid blurry or low-quality images.
    Workflow:
    github.com/GraftingRayman/Com...
    Filename: MaskingWithIPAdapter.json
    Anything Everywhere:
    github.com/chrisgoringe/cg-us...
    IPAdapter:
    github.com/cubiq/ComfyUI_IPAd...
    GR Prompt Selector:
    github.com/GraftingRayman/Com...
    Ultimate SD Upscale:
    github.com/ssitu/ComfyUI_Ulti...
    ComfyRoll:
    github.com/Suzie1/ComfyUI_Com...
    Inspire Custom Nodes:
    github.com/ltdrdata/ComfyUI-I...
    Model: Juggernaut Reborn:
    civitai.com/models/46422/jugg...
    #0013
    #ComfyUI #upscale #anthingeverywhere #ipadapter #inpainting #masking #imageenhancement
  • Science & Technology

Comments • 81

  • @user-gt9qq9uf9q
    @user-gt9qq9uf9q a month ago +2

    Nice tutorial! I've learned a lot!

  • @BruknerZigfreed
    @BruknerZigfreed a month ago +2

    Useful, lot of info, easy to understand, thanks.

  • @baseerfarooqui5897
    @baseerfarooqui5897 a day ago +1

    great for learning

  • @mattc3510
    @mattc3510 21 days ago +2

    You are a pro. I love your videos thank you

  • @adriantang5811
    @adriantang5811 a month ago +1

    Very useful tutorial, thank you so much!

  • @kwlook90
    @kwlook90 a month ago +1

    Great work. I hope people will start noticing your channel. 😀

  • @jinxing-xv3py
    @jinxing-xv3py 8 days ago +1

    you are amazing~

  • @aeit999
    @aeit999 a month ago

    Subbed, cool help, thanks

  • @n3bie
    @n3bie a month ago +1

    Awesome video sir. +1 Sub.

  • @PierreHenriBessou
    @PierreHenriBessou a month ago

    Very nice video. Useful tips.
    Still got a few questions, if you don't mind.
    Is there any reason why you use the nodes [ loraLoader Model Only + IPAdapter Model Loader + clipVision + insightFace ] instead of the IPAdapter Unified loader FaceID?
    For the face reference, I see you use a batch load, but no "Prep Image For ClipVision" node. I've seen that node used in many workflows, but maybe you prepared your reference images manually. Did you do something special to your dataset, like resizing or cropping the images first?
    Anyway, I used to run a second KSampler at 0.5 denoising; I didn't think about running a third one. I'll try that out, nice idea.
    Thanks again, good job.

    • @GraftingRayman
      @GraftingRayman  a month ago +2

      The unified loader sometimes does not work for me, not sure why; I tried a few different workflows and they seemed to crap out, so I swapped in the standard version, which works a treat. I use a crop face node prior to this workflow to save just the faces.
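As an aside, the face-prep step mentioned here (cropping reference images down to the face region before feeding them to CLIP vision) can be sketched outside ComfyUI. This is my own illustration, not part of the video's workflow; it only computes the largest centered square crop box, and the Pillow usage in the comment is a hypothetical example (224x224 is the usual CLIP vision input size):

```python
def center_crop_box(width, height):
    """Largest centered square crop as a (left, top, right, bottom) box."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# Hypothetical usage with Pillow (filenames are placeholders):
#   from PIL import Image
#   img = Image.open("face.jpg")
#   img = img.crop(center_crop_box(*img.size)).resize((224, 224))
print(center_crop_box(640, 480))  # → (80, 0, 560, 480)
```

In practice a face-detection crop (as the reply describes) is better than a blind center crop, since the face is rarely dead center.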

  • @zraieee
    @zraieee a month ago +1

    Well done. Please, how can I get the glowing laser light showing between nodes?

    • @GraftingRayman
      @GraftingRayman  a month ago +1

      That's with the Anything Everywhere node

  • @Teardropbrut
    @Teardropbrut 24 days ago

    Question: can I add a FaceDetailer after the KSampler so that the different faces don't get averaged to be the same?

    • @GraftingRayman
      @GraftingRayman  23 days ago

      Yes you can, I have done so myself, but the results show the KSampler does a good job as it is

  • @r1nnk
    @r1nnk 10 days ago

    Can't find it... how did you make this glowing effect on the routes?

    • @GraftingRayman
      @GraftingRayman  9 days ago

      That is the Anything Everywhere node

  • @frustasistumbleguys4900
    @frustasistumbleguys4900 a month ago

    I love it, but how do I make the three faces look the same after upscaling? Right now, they still look different

    • @GraftingRayman
      @GraftingRayman  a month ago

      Are you running them through a ksampler with lower noise after the upscale?

  • @UTA999
    @UTA999 2 days ago

    Do you have a link to download the mask files, or do we need to create them ourselves? On your GitHub I noticed you have a multi-mask create node, but I think it doesn't create the part as transparent? Also, how do you get the links to the Anything Everywhere nodes to light up as you do in the video? Is it a setting somewhere once the node is installed? Thanks!

    • @GraftingRayman
      @GraftingRayman  2 days ago

      You can use the Multi Mask Create; it is transparent. The Anything Everywhere node has settings in the ComfyUI settings: the option titled "Anything Everywhere animate UE links" should be set to "Both", and further below change "Anything Everywhere show links" to "Selected and mouseover nodes".

    • @UTA999
      @UTA999 2 days ago +1

      @@GraftingRayman Thanks for the reply. I'll try generating the masks again - I just had the Multimask Create linked to Mask Previews then right clicked and saved the images. They just looked different to your video (they were black and white strips rather than black and grey) and the flow didn't seem to work for me. Not sure if it was because I only have 1 picture and so swapped the Load Image Batch for a Load Image node. I guess you could also possibly have the Multimask Create in this flow directly generating the masked images? I'll give it a try. I'll also take a look in the settings as advised. Cheers.

    • @GraftingRayman
      @GraftingRayman  a day ago

      @@UTA999 I use the Multi Mask Create node in my updated workflow and it works just the same. When you save the image it does not keep the transparency, but when used inside ComfyUI it does.

    • @UTA999
      @UTA999 a day ago

      @@GraftingRayman Thanks for the confirmation. After a bit of playing around I now have all the issues I was experiencing sorted.

  • @n3bie
    @n3bie a month ago +1

    I've been trying to use this with SDXL and I'm finding the images don't seem to be coming out with faces that look much like my reference pictures. Had 3 quick questions:
    1. When you said SDXL wasn't working well for you in the video, did you mean just in general, or specifically in this workflow, in that the faces didn't seem to come out? 2. Do the reference images need to be close-ups of the face, or will this work with full-body reference photos as well? They can be JPEGs, right? 3. If I'm only making an image of a single person, it should be okay to use a mask that is just a transparent PNG image without the black part, right?
    EDIT: Actually, I think I have it working better now, just needed to adjust some of the models I was using... think I had the wrong IPAdapter model selected hehehe eeer.. I'm not the smartest. Thanks again!

    • @GraftingRayman
      @GraftingRayman  a month ago +1

      Hi @n3bie, reference images work best if they are head shots. I've not been having much luck with multi-person generations with SDXL; it works fine for a single person, but as soon as I get a 2nd or 3rd person involved it craps out. That is using InstantID, though; it works fine with FaceID.

    • @n3bie
      @n3bie a month ago +1

      @@GraftingRayman Ah I see, I actually only built a single-character workflow, but it's nice to have that info for when I try to expand it. I have to say though, this workflow as a single-character generator is working fantastically for me using SDXL. I expanded the reference images to about 20; maybe a third of those are close-up portraits as you suggested, but after I was having trouble generating anything but close-ups of the face, I threw a bunch more in the folder including full-body poses, and I'm having pretty good results generating a lot of different poses that all look like the reference model. If anybody comes across this, I'm using Darker Tanned Skin off of CivitAI for an SDXL-compatible LoRA, and it's working quite nicely. Thanks again Rayman!

  • @DaCashRap
    @DaCashRap 6 days ago

    This is a very useful tutorial, and you come across as someone proficient in this field of work. As the author of the "GR Prompt Selector" node included in the workflow, could you provide instructions on how to get it running? I've seen multiple people having problems with it down in the comments. Cloning the repo into the "custom_nodes" folder obviously isn't enough, for some reason.

    • @GraftingRayman
      @GraftingRayman  6 days ago

      I'm not really a coder, but from what I noticed the requirements were missing for some people. I added a requirements.txt file to the repo not too long ago; installing it with "pip install -r requirements.txt" in the node's root folder will fix most if not all issues.

    • @DaCashRap
      @DaCashRap 6 days ago

      @@GraftingRayman I've done that already. ComfyUI is still unable to load the nodes. In one of your replies to a different comment you talked about having a "clip" folder in "python_embeded". How can that be achieved?

    • @GraftingRayman
      @GraftingRayman  6 days ago +1

      A lot of people have both a system Python and the Python bundled with the portable version of ComfyUI. When you do "pip install clip" it installs into the system Python; you need to run the embedded Python to install. In your ComfyUI folder, run the following command: ".\python_embeded\python.exe -m pip install git+https://github.com/openai/CLIP.git". This will install CLIP in the correct place.
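The system-Python-versus-embedded-Python mixup described here can be diagnosed from inside whichever interpreter ComfyUI actually runs. A minimal standard-library sketch of my own (not from the video) that prints the running interpreter's path and whether the clip module is visible to it:

```python
import importlib.util
import sys

def has_module(name):
    """True if `name` is importable by the interpreter running this script."""
    return importlib.util.find_spec(name) is not None

# For ComfyUI portable this path should end in python_embeded\python.exe;
# if it points at a system Python, pip has been installing packages elsewhere.
print(sys.executable)
print("clip importable:", has_module("clip"))
```

If the second line prints False, installing with `<that sys.executable path> -m pip install ...` targets the right environment.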

    • @DaCashRap
      @DaCashRap 6 days ago +1

      @@GraftingRayman That's it! It works now, thanks for your help.

  • @badterry
    @badterry a month ago

    damn bro. whats ur pc specs? FaceID runs so slow on my old ass machine

    • @GraftingRayman
      @GraftingRayman  a month ago +1

      Haha, that speed is done by editing; it's still slow as anything

  • @mahesh001234
    @mahesh001234 a month ago

    Please make a workflow for SDXL as well, with InstantID for multiple people

    • @GraftingRayman
      @GraftingRayman  a month ago

      The results are very crappy with InstantID for multiple people, not worth the effort

  • @CravingWatermelon
    @CravingWatermelon a month ago

    Getting the error below:
    Error occurred when executing VAEDecode:
    'VAE' object has no attribute 'vae_dtype'
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)

    • @GraftingRayman
      @GraftingRayman  a month ago

      How much VRAM do you have on your GPU?

  • @jd38
    @jd38 2 days ago

    Hi, the workflow is connected, but the result can't make the face the same as the source. Any solution? The only node I changed is the clip text prompt, to the default one.

    • @GraftingRayman
      @GraftingRayman  2 days ago

      Can you send me a screenshot on discord or github?

  • @HeangBorin
    @HeangBorin a month ago +2

    Please help, I got the message (IMPORT FAILED) for GR Prompt Selector in the manager

    • @GraftingRayman
      @GraftingRayman  a month ago

      what is the full error?

    • @HeangBorin
      @HeangBorin 28 days ago

      Dear Sir @@GraftingRayman, here is the log: File "C:\Users\ccc\OneDrive\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_GraftingRayman\__init__.py", line 1, in
      from .GRnodes import GRPromptSelector, GRImageResize, GRMaskResize, GRMaskCreate, GRMultiMaskCreate, GRImageSize, GRTileImage, GRPromptSelectorMulti, GRTileFlipImage, GRMaskCreateRandom, GRStackImage, GRResizeImageMethods, GRImageDetailsDisplayer, GRImageDetailsSave
      File "C:\Users\ccc\OneDrive\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_GraftingRayman\GRnodes.py", line 11, in
      from clip import tokenize, model
      ModuleNotFoundError: No module named 'clip'

    • @GraftingRayman
      @GraftingRayman  28 days ago

      you may try "pip install clip", that should install the clip package

    • @Teardropbrut
      @Teardropbrut 24 days ago

      @@GraftingRayman I have both Auto1111 and ComfyUI portable installed. The clip package got copied to the Pinokio Miniconda folder, from where I copied it into the Comfy Lib folder. Still wasn't able to load the node. Import failed via the manager, and the same result with git clone; same content in the folder. I also seem to have onnx and onnxruntime installed.

    • @GraftingRayman
      @GraftingRayman  23 days ago

      Both the clip and the clip info folders were copied to ComfyUI_windows_portable\python_embeded\Lib\site-packages?

  • @mahesh001234
    @mahesh001234 a month ago

    Getting the error below:
    File "C:\Users\Mahesh\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_GraftingRayman\nodes.py", line 8, in
    from clip import tokenize, model
    ModuleNotFoundError: No module named 'clip'

    • @GraftingRayman
      @GraftingRayman  a month ago +2

      run the following: pip install git+https://github.com/openai/CLIP.git

  • @user-wl4my6ue7e
    @user-wl4my6ue7e a month ago +1

    "When loading the graph, the following node types were not found:
    GR Prompt Selector
    Nodes that have failed to load will show as red on the graph."
    It's inside ComfyUI\custom_nodes but it doesn't work.
    Help me please.

    • @GraftingRayman
      @GraftingRayman  a month ago

      Run the following command in your custom_nodes folder, or use ComfyUI Manager: "git clone https://github.com/GraftingRayman/ComfyUI_GraftingRayman"

    • @user-wl4my6ue7e
      @user-wl4my6ue7e a month ago +1

      @@GraftingRayman "It's inside ComfyUI\custom_nodes but not work" - it's installed, but "Failed to load"

    • @user-wl4my6ue7e
      @user-wl4my6ue7e a month ago

      @@GraftingRayman Maybe it's the same problem:
      ```
      DWPose: Onnxruntime with acceleration providers detected. Caching sessions (might take around half a minute)...
      2024-05-20 16:46:37.3574522 [E:onnxruntime:Default, provider_bridge_ort.cc:1534 onnxruntime::TryGetProviderInfo_TensorRT] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\Ai\SD\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
      *************** EP Error ***************
      EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:456 onnxruntime::python::RegisterTensorRTPluginsAsCustomOps Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
      when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
      Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
      ****************************************
      2024-05-20 16:46:38.4698587 [E:onnxruntime:Default, provider_bridge_ort.cc:1534 onnxruntime::TryGetProviderInfo_TensorRT] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\Ai\SD\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
      *************** EP Error ***************
      EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:456 onnxruntime::python::RegisterTensorRTPluginsAsCustomOps Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
      when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
      Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
      ****************************************
      ```

    • @user-wl4my6ue7e
      @user-wl4my6ue7e a month ago

      I don't have any D:\a folder at all, by the way

    • @GraftingRayman
      @GraftingRayman  a month ago

      delete the folder for the node and reinstall

  • @webrest
    @webrest 9 days ago

    - Value not in list: vae_name: 'AnimateEveryone\diffusion_pytorch_model.bin' not in []
    I'm getting the above error, not able to find this VAE

    • @GraftingRayman
      @GraftingRayman  9 days ago

      If you have put the model in a different folder, you will need to change it; I manually copied mine into the checkpoints\animateanyone folder

  • @jymercy5059
    @jymercy5059 5 days ago

    Please help, I got the message (IMPORT FAILED GraftingRayman) for GR Prompt Selector in the manager as well. I am using Stability Matrix.

    • @GraftingRayman
      @GraftingRayman  5 days ago

      Try running this in your ComfyUI folder: ".\python_embeded\python.exe -m pip install git+https://github.com/openai/CLIP.git"

    • @jymercy5059
      @jymercy5059 5 days ago +1

      @@GraftingRayman Thank you so much! It works~

  • @thefransvan5966
    @thefransvan5966 2 days ago

    Seems that your custom nodes extension, as well as the Inspire pack, don't import for me. I installed the CLIP dependencies but it didn't help.

    • @GraftingRayman
      @GraftingRayman  2 days ago +1

      What error do you get?

    • @thefransvan5966
      @thefransvan5966 a day ago

      @@GraftingRayman Regarding both the Inspire pack and your extension, it seems it has to do with a "c2" module that was not found. Looking a bit into the issue, it seems I maybe just have Python v3 installed; am I required to use a 2.x.x version of Python instead?

    • @GraftingRayman
      @GraftingRayman  a day ago

      Python v3 is fine. I am not aware of a C2 module that is required; I will look into it.

    • @thefransvan5966
      @thefransvan5966 a day ago

      @@GraftingRayman I'm sorry i wrote it wrong, i meant CV2 module is not found when importing both of those extensions. Apologies.

    • @GraftingRayman
      @GraftingRayman  a day ago

      @@thefransvan5966 Aaah. The cv2 module comes from the opencv-python package. You can run "pip install opencv-python" if you have system Python, or if you are using ComfyUI portable you can run ".\python_embeded\python.exe -m pip install opencv-python" in your ComfyUI folder; that will resolve the issue with cv2.
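A common trap in this thread is that the import name (cv2) differs from the PyPI package name (opencv-python). A small sketch of my own (not from the video) that checks for a missing module and prints, rather than runs, the pip command for the current interpreter:

```python
import importlib.util
import sys

# The module imports as "cv2", but the PyPI package is named "opencv-python".
PYPI_NAME = {"cv2": "opencv-python"}

def install_hint(module):
    """Return a pip command for the current interpreter if `module` is missing."""
    if importlib.util.find_spec(module) is not None:
        return None  # already importable, nothing to install
    package = PYPI_NAME.get(module, module)
    return "{} -m pip install {}".format(sys.executable, package)

print(install_hint("cv2") or "cv2 already available")
```

Using sys.executable builds the command for whichever Python is running, which sidesteps the system-versus-embedded confusion discussed above.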

  • @lukeovermind
    @lukeovermind 11 days ago +1

    Hey, got a sub from me, thanks, this was great. Does adding a KSampler after SD Upscale generally improve quality?

  • @dennischo2140
    @dennischo2140 a month ago

    Can you share your mask.png files with us? They would be very useful

    • @GraftingRayman
      @GraftingRayman  a month ago +1

      You can use my mask create node instead; check my GitHub link in the bio

    • @dennischo2140
      @dennischo2140 a month ago

      @@GraftingRayman Thank you~

  • @cemilhaci2
    @cemilhaci2 a month ago

    Thanks, but the workflow link won't work

    • @GraftingRayman
      @GraftingRayman  a month ago

      If you right-click the link and save the file as a .json, it will work

    • @RodiZai-pk9ty
      @RodiZai-pk9ty a month ago

      @@GraftingRayman No, still not working; apparently it's a Pastebin issue

    • @GraftingRayman
      @GraftingRayman  a month ago

      @@RodiZai-pk9ty You can download it from my GitHub: github.com/GraftingRayman/ComfyUI_GR_PromptSelector/tree/main/Workflows