Two Methods for Fixing Faces in ComfyUI

  • Published Jan 29, 2025

Comments • 32

  • @mikrodizels
    @mikrodizels 10 months ago +4

    That FaceDetailer looks amazing, I like creating images with multiple people in them, so faces are the bane of my existence

    • @amorgan5844
      @amorgan5844 10 months ago +2

      It's the most discouraging part of making AI art

    • @greypsyche5255
      @greypsyche5255 10 months ago +2

      try hands.

  • @mikemaldanado6015
    @mikemaldanado6015 months ago

    Nice work. Most people rush through, don't provide workflows, and aren't so great at explaining. Subbed.

  • @JohnSundayBigChin
    @JohnSundayBigChin 6 months ago

    Insane! I already had all the nodes installed from other tutorials, but I never knew exactly what each one did. Thanks for sharing your workflow!

  • @lukeovermind
    @lukeovermind 7 months ago

    Thanks, having a simple and an advanced face detailer is clever. Going to try it. Got a sub from me, keep going!

  • @jocg9168
    @jocg9168 10 months ago

    Great workflow for the fix. I'm wondering about proper scenes where characters are actually not looking at the camera, like a 3/4 view, looking at a phone, or using a tablet, rather than creepily staring at the camera. Am I the only one who gets bad results on that type of image? But I will definitely try this new fix. Thanks for the tip.

  • @TheSORCERER-p9l
    @TheSORCERER-p9l 2 months ago

    Is there a place to plug an already-generated image into this to fix it?

  • @WatchMysh
    @WatchMysh 5 months ago

    Thanks for the tutorial! I've set up the simple face restore as shown, but I only get a black image as output, while the unrestored image comes out fine. Any ideas?
    //edit: it says "starting restore_face etc ... with fidelity 0.5". Then there's just "prompt executed" and that's it.

  • @valorantacemiyimben
    @valorantacemiyimben 6 months ago

    Hello, how can we do professional face changing like this?

  • @meadow-maker
    @meadow-maker 8 months ago +1

    You don't explain how to set the node up?????

  • @IMedzon
    @IMedzon 8 months ago

    Useful video, thanks!

  • @PavewayIII-gbu24
    @PavewayIII-gbu24 9 months ago

    Great tutorial, thank you

  • @shivas4831
    @shivas4831 4 months ago

    Hi, does it work on already-existing images?

  • @CornPMV
    @CornPMV 8 months ago +2

    One question: What can I do if I have several people in my picture, e.g. in the background? Can I somehow influence Facedetailer to only refine the main person in the middle?

    • @maxehrlich
      @maxehrlich 7 months ago

      Probably crop that section, run the fix, and composite the image back in.
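The crop-run-composite idea suggested above can be sketched outside ComfyUI with Pillow. This is a minimal illustration of the approach, not the video's workflow: the face box coordinates are arbitrary, and `fix_face` is a hypothetical placeholder standing in for an actual detailer pass.

```python
from PIL import Image, ImageFilter

def fix_face(crop: Image.Image) -> Image.Image:
    # Placeholder for the real face fix (e.g. a FaceDetailer pass);
    # a simple sharpen keeps the example self-contained.
    return crop.filter(ImageFilter.SHARPEN)

def composite_face_fix(image: Image.Image, box: tuple) -> Image.Image:
    # box = (left, top, right, bottom) around the main person's face
    crop = image.crop(box)
    fixed = fix_face(crop)
    out = image.copy()
    out.paste(fixed, box[:2])  # paste the fixed region back in place
    return out

img = Image.new("RGB", (512, 512), "gray")
result = composite_face_fix(img, (200, 150, 312, 262))
print(result.size)  # (512, 512) — same canvas, only the face region changed
```

Because only the cropped region is reprocessed, background people are left untouched, which is the point of the suggestion.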

  • @PIQK.A1
    @PIQK.A1 9 months ago

    how to facedetail vid2vid?

  • @MrTheraseus
    @MrTheraseus 4 months ago

    Good tutorial, but a few times on the last step I end up getting the same face on everybody. Any idea what the problem is? Maybe because in the prompt I say a woman / a man, but I have no idea how to fix it. Thanks!

  • @AngryApple
    @AngryApple 10 months ago

    Would a Lightning model be a plug-and-play replacement for this, just because of the different license?

    • @HowDoTutorials
      @HowDoTutorials 10 months ago

      I've tested the JuggernautXL lightning model and it works great without any modification to the workflow. Some models may work better with different schedulers, cfg, etc., but in general they should work fine.

    • @AngryApple
      @AngryApple 10 months ago

      @@HowDoTutorials I will try it, thanks

  • @smert_rashistskiy_pederacii
    @smert_rashistskiy_pederacii 8 months ago

    Interesting, but the second method does not work for me. No matter what the resolution, I always get this error:
    Error occurred when executing FaceDetailer:
    The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1
    File "C:\Users\Alex\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)

    • @bentontramell
      @bentontramell 6 months ago

      This sometimes happens when mixing SD and SDXL assets in the workflow.

  • @97BuckeyeGuy
    @97BuckeyeGuy 10 months ago

    You have an interesting cadence to your speech. Is this a real voice or AI?

    • @HowDoTutorials
      @HowDoTutorials 10 months ago +3

      A bit of both. I record the narration with my real voice, edit out the spaces and ums (mostly), and then pass it through ElevenLabs speech to speech.

    • @97BuckeyeGuy
      @97BuckeyeGuy 10 months ago

      @@HowDoTutorials That explains why I kept going back and forth with my opinion on this. Thank you 👍🏼

    • @lukeovermind
      @lukeovermind 7 months ago

      @@HowDoTutorials That's very clever. It's a very soothing voice.

  • @focus678
    @focus678 10 months ago

    What GPU spec are you using?

    • @HowDoTutorials
      @HowDoTutorials 10 months ago +2

      I'm using a 3090 which is probably something I should mention going forward so people can set their expectations properly. 😅

  • @kamruzzamanuzzal3764
    @kamruzzamanuzzal3764 10 months ago

    So that's how you correctly use turbo models. Till now I used 20 steps with turbo models and just one pass; it seems using two passes with 5 steps each is much, much better. What about using Deep Shrink alongside it?

    • @HowDoTutorials
      @HowDoTutorials 10 months ago +1

      I just played around with it a bit and it doesn’t seem to have much of an effect on this workflow, likely because of the minimal upscaling and lower denoise value, but thanks for bringing that node to my attention! I can definitely see a lot of other uses for it.
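The two-pass idea discussed in this thread (a few steps per pass, with a low denoise on the refinement pass) can be made concrete with the usual img2img arithmetic: with a denoise of d, a sampler scheduled for n steps skips the early timesteps and only actually runs about d × n of them. A small sketch; the specific step counts and denoise values are illustrative, not taken from the video:

```python
def effective_steps(steps: int, denoise: float) -> int:
    # In img2img-style sampling, denoise < 1.0 starts partway through
    # the schedule, so roughly steps * denoise steps are executed.
    return round(steps * denoise)

# First pass: full denoise at the turbo model's low native step count.
first = effective_steps(5, 1.0)
# Second pass after upscaling: low denoise to refine, not repaint.
second = effective_steps(10, 0.5)
print(first, second)  # 5 5
```

This is why the second pass can afford a lower denoise and fewer effective steps, which also matches the reply's note that reducing CFG and steps on the second pass avoids the "overbaked" look.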
      EDIT: I realized I was using it incorrectly by trying to inject it into the second pass. Once I figured out how to use it properly, I could definitely see the potential. It's hard to tell whether the Kohya by itself is better than the two pass or not, but Kohya into a second pass is pretty great. I noticed that reducing CFG and steps for the second pass is helpful to reduce the "overbaked" look.