Two Methods for Fixing Faces in ComfyUI

  • Published Jun 13, 2024
  • This video provides a guide to fixing faces in ComfyUI using two different methods. The workflow uses the TurboVisionXL model, which produces high-quality results quickly.
    Links
    Simple Face Fix Workflow: comfyworkflows.com/workflows/...
    Advanced Face Fix Workflow: comfyworkflows.com/workflows/...
    TurboVisionXL Model: civitai.com/models/215418/tur...
    Chapters
    0:00 Overview
    0:18 TurboVisionXL Model
    1:14 Simple Face Fix Setup & Walkthrough
    6:54 Advanced Face Fix Setup & Walkthrough
    13:11 Advanced Face Fix with Multiple Subjects
    *Narration created using ElevenLabs' speech-to-speech synthesis
  • Howto & Style

Comments • 24

  • @mikrodizels 2 months ago +2

    That FaceDetailer looks amazing, I like creating images with multiple people in them, so faces are the bane of my existence

    • @amorgan5844 2 months ago +1

      It's the most discouraging part of making AI art.

    • @greypsyche5255 2 months ago +1

      try hands.

  • @PavewayIII-gbu24 1 month ago

    Great tutorial, thank you

  • @IMedzon 21 days ago

    Useful video, thanks!

  • @jocg9168 3 months ago

    Great workflow for fixes. I'm wondering: in scenes where characters aren't looking at the camera (3/4 view, looking at a phone, using a tablet, rather than staring creepily into the lens), am I the only one who gets bad results on that type of image? I'll definitely try this new fix. Thanks for the tip.

  • @jbnrusnya_should_be_punished 26 days ago

    Interesting, but the 2nd method does not work for me. No matter what the resolution, I always get this error:
    Error occurred when executing FaceDetailer:
    The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1
    File "C:\Users\Alex\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
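    For context, the error text itself comes from PyTorch's broadcasting rules, and the size-0 tensor suggests something upstream produced an empty result (e.g. no face or mask was detected at that resolution); that interpretation is an assumption, not from the video. A minimal reproduction of the underlying PyTorch error, outside ComfyUI:

```python
# Minimal repro of the broadcasting error above (not ComfyUI-specific).
# Broadcasting fails when one operand has size 0 and the other size 256
# in the same dimension, since neither is 1.
import torch

a = torch.zeros(2, 0)    # stand-in for an empty detection result
b = torch.zeros(2, 256)  # stand-in for the tensor the node expected

err = ""
try:
    _ = a + b
except RuntimeError as e:
    err = str(e)

print(err)
# "The size of tensor a (0) must match the size of tensor b (256)
#  at non-singleton dimension 1"
```

    If that is the cause, checking that the detector actually finds a face (or lowering the detection threshold) would be the first thing to try.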

  • @jroc6745 2 months ago

    This looks great, thanks for sharing. How can this be adapted for img2img?

    • @HowDoTutorials 2 months ago +2

      Here's a modified workflow: comfyworkflows.com/workflows/cd47fbe6-68cc-4f40-8646-dfc62d32eeb4

  • @meadow-maker 14 days ago

    You don't explain how to set the node up?

  • @CornPMV 1 month ago +1

    One question: what can I do if I have several people in my picture, e.g. in the background? Can I somehow get FaceDetailer to refine only the main person in the middle?

    • @maxehrlich 6 days ago

      Probably crop that section, run the fix, and composite it back in.
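      The crop/fix/composite idea can be sketched outside ComfyUI with Pillow: cut out a box around the main face, run the fix on just that crop, and paste the result back. Here `run_face_fix` and the box coordinates are hypothetical placeholders standing in for the FaceDetailer pass, not part of the actual workflow:

```python
# Sketch: fix only one region of an image by crop -> fix -> paste back.
from PIL import Image

def run_face_fix(face_crop: Image.Image) -> Image.Image:
    # Placeholder (assumption): in ComfyUI this would be the
    # FaceDetailer pass run on the cropped face only.
    return face_crop

def fix_region(image: Image.Image, box: tuple) -> Image.Image:
    """Crop `box` (left, top, right, bottom), fix it, composite it back."""
    crop = image.crop(box)
    fixed = run_face_fix(crop)
    result = image.copy()
    result.paste(fixed, box[:2])  # paste at the crop's top-left corner
    return result

# Usage with a dummy image and a made-up face box:
img = Image.new("RGB", (512, 512), "gray")
out = fix_region(img, (200, 100, 312, 212))
print(out.size)  # (512, 512)
```

      In a real graph the same shape is achievable with crop and ImageCompositeMasked-style nodes, so only the selected face is re-detailed.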

  • @PIQK.A1 2 months ago

    How do you FaceDetail vid2vid?

  • @AngryApple 3 months ago

    Would a Lightning model be a plug-and-play replacement for this, given the different license?

    • @HowDoTutorials 3 months ago

      I've tested the JuggernautXL lightning model and it works great without any modification to the workflow. Some models may work better with different schedulers, cfg, etc., but in general they should work fine.

    • @AngryApple 3 months ago

      @@HowDoTutorials I will try it, thanks

  • @kamruzzamanuzzal3764 3 months ago

    So that's how you correctly use turbo models. Until now I used 20 steps with turbo models in just one pass; it seems two passes with 5 steps each is much, much better. What about using Deep Shrink alongside it?

    • @HowDoTutorials 3 months ago +1

      I just played around with it a bit and it doesn’t seem to have much of an effect on this workflow, likely because of the minimal upscaling and lower denoise value, but thanks for bringing that node to my attention! I can definitely see a lot of other uses for it.
      EDIT: I realized I was using it incorrectly by trying to inject it into the second pass. Once I figured out how to use it properly, I could definitely see the potential. It's hard to tell whether Kohya Deep Shrink by itself is better than the two-pass approach, but Kohya into a second pass is pretty great. I noticed that reducing CFG and steps for the second pass helps reduce the "overbaked" look.

  • @focus678 2 months ago

    What GPU are you using?

    • @HowDoTutorials 2 months ago +2

      I'm using a 3090 which is probably something I should mention going forward so people can set their expectations properly. 😅

  • @97BuckeyeGuy 3 months ago

    You have an interesting cadence to your speech. Is this a real voice or AI?

    • @HowDoTutorials 3 months ago +2

      A bit of both. I record the narration with my real voice, edit out the spaces and ums (mostly), and then pass it through ElevenLabs speech to speech.

    • @97BuckeyeGuy 3 months ago

      @@HowDoTutorials That explains why I kept going back and forth with my opinion on this. Thank you 👍🏼