How Do?
Two Methods for Fixing Faces in ComfyUI
This video provides a guide for fixing faces in ComfyUI using two different methods. The workflow uses the TurboVisionXL model, which produces high-quality results quickly.
Links
Simple Face Fix Workflow: comfyworkflows.com/workflows/a7672ac1-75c8-4759-a1d7-71a207728045
Advanced Face Fix Workflow: comfyworkflows.com/workflows/2ff67296-33d2-4fe3-a2be-bd2aa32748ac
TurboVisionXL Model: civitai.com/models/215418/turbovisionxl-super-fast-xl-based-on-new-sdxl-turbo-3-5-step-quality-output-at-high-resolutions
Chapters
0:00 Overview
0:18 TurboVisionXL Model
1:14 Simple Face Fix Setup & Walkthrough
6:54 Advanced Face Fix Setup & Walkthrough
13:11 Advanced Face Fix with Multiple Subjects
*Narration created using ElevenLabs' Speech-to-Speech synthesis
Views: 8,016

Videos

Stable Cascade in ComfyUI with Updated Method and Custom Workflows
8K views · 4 months ago
This video provides a guide for running Stable Cascade in ComfyUI with the updated models as well as some custom workflows that address the less desirable aspects of Stable Cascade. Links Stable Cascade ComfyUI Examples: comfyanonymous.github.io/ComfyUI_examples/stable_cascade/ Stable Cascade ComfyUI Checkpoints: huggingface.co/stabilityai/stable-cascade/tree/main/comfyui_checkpoints Stable Cas...
Stable Cascade in ComfyUI Made Simple
10K views · 5 months ago
This video provides a guide for running Stable Cascade in ComfyUI. UPDATE comfyanonymous has released an updated method with img2img and more. Updated video here: th-cam.com/video/GOnMXejA8Fc/w-d-xo.html Links ComfyUI Workflow: comfyworkflows.com/workflows/15b50c1e-f6f7-447b-b46d-f233c4848cbc Stable Cascade Models: huggingface.co/stabilityai/stable-cascade/tree/main Chapters 0:00 Overview 0:21 ...
Local, Free-Range, AI Chat
548 views · 7 months ago
This video provides a guide to installing LM Studio, including instructions on how to get started with the Dolphin 2.5 fine-tune of the Mixtral 8x7B Mixture of Experts model. *Note: There appears to be an issue with repetitive text outputs in the Mixtral model. For best results at the moment, you might try using OpenHermes 2.5 Mistral 7B, which also has the advantage of being a much smaller mod...
Install ComfyUI from Scratch
1.3K views · 7 months ago
This video provides a guide for installing ComfyUI. The guide assumes you are using Windows and have none of the necessary pieces installed. Tips I forgot to mention in the video: - You can create a desktop shortcut for ComfyUI by right-clicking on the .bat file, clicking "Show More Options" then "Send to" then "Desktop (Create shortcut)". - You can update ComfyUI using ComfyUI Manager. Just cl...
Colorize and Restore Old Images
4.1K views · 7 months ago
This video provides a guide for colorizing and restoring old images using Unsampling and ControlNets in ComfyUI with Stable Diffusion. This is a follow-up of my video on "Reimagining" images and uses the same workflow with a few tweaks. For a more in-depth look at how that workflow works, you can watch that video here: th-cam.com/video/CRURtIltf58/w-d-xo.html Links Custom Workflow: comfyworkflo...
Reimagine Any Image in ComfyUI
16K views · 7 months ago
This video provides a guide for recreating and "reimagining" any image using Unsampling and ControlNets in ComfyUI with Stable Diffusion. Links Custom Workflow: comfyworkflows.com/workflows/94b32ebe-e2be-4f44-b341-bc4793fe4941 ControlNet Aux Preprocessors: github.com/Fannovel16/comfyui_controlnet_aux ComfyUI Noise Nodes: github.com/BlenderNeko/ComfyUI_Noise Stable Diffusion 1.5: huggingface.co/r...
How Do Stable Video Diffusion?
3.2K views · 7 months ago
This video provides a guide for creating video clips using ComfyUI with Stable Video Diffusion, including custom workflows that improve upon existing examples, adding upscaling and a great solution for frame interpolation. * The "Improved" workflows below include tweaks discovered after releasing this video to improve quality of outputs. Links Custom Workflows: drive.google.com/drive/folders/1Z...

Comments

  • @JohnSundayBigChin
    @JohnSundayBigChin 3 days ago

    Insane! I already had all the nodes from other tutorials installed, but I never knew exactly what each one did. Thanks for sharing your workflow!

  • @valorantacemiyimben
    @valorantacemiyimben 14 days ago

    Hello, how can we do professional face changing like this?

  • @soljr9175
    @soljr9175 29 days ago

    Your workflow link doesn't work. It would have been nice if you included it on Hugging Face.

  • @kevint.8553
    @kevint.8553 1 month ago

    I successfully installed the Manager, but don't see the manager options on the UI page.

  • @lukeovermind
    @lukeovermind 1 month ago

    Thanks, having a simple and advanced face detailer is clever. Going to try it. Got a sub from me, keep going!

  • @bobwinberry
    @bobwinberry 1 month ago

    Thanks for your videos! They worked great, but now (due to updates?) this workflow no longer works; it seems to be lacking the BNK_Unsampler. Is there a workaround for this? I've tried, but aside from stumbling around, this is way over my head. Thanks for any help you might have, and thanks again for the videos - well done!

  • @FiXANoNada
    @FiXANoNada 1 month ago

    Finally, a guide that I can comprehend and follow, and then even play around with. You are so kind to even list all the resources in the description in a well-organized manner. Instant sub from me.

  • @meadow-maker
    @meadow-maker 1 month ago

    You don't explain how to set the node up?

  • @nawafalhinai1643
    @nawafalhinai1643 1 month ago

    Where should I put all the files from the links?

  • @IMedzon
    @IMedzon 2 months ago

    Useful video, thanks!

  • @jbnrusnya_should_be_punished
    @jbnrusnya_should_be_punished 2 months ago

    Interesting, but the 2nd method does not work for me. No matter what the resolution, I always get this error: Error occurred when executing FaceDetailer: The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1 File "C:\Users\Alex\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all)

    • @bentontramell
      @bentontramell 8 days ago

      This sometimes happens when mixing SD and SDXL assets in the workflow.
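      As a rough illustration of why mixed model families produce this kind of tensor-size error (a minimal sketch, not ComfyUI internals; the 768 and 2048 embedding widths are typical SD 1.5 vs. SDXL values used here as an assumption, and `check_compatible` is a hypothetical helper):

```python
# Sketch: SD 1.5 and SDXL text encoders produce conditioning of different
# widths, so nodes that combine conditioning from mismatched model families
# fail with size errors like the one quoted above.
def check_compatible(cond_a, cond_b):
    """Return True when two conditioning vectors have matching widths."""
    return len(cond_a) == len(cond_b)

sd15_cond = [0.0] * 768    # SD 1.5-style embedding width (assumed)
sdxl_cond = [0.0] * 2048   # SDXL-style embedding width (assumed)

print(check_compatible(sd15_cond, sd15_cond))  # True
print(check_compatible(sd15_cond, sdxl_cond))  # False: mixed SD/SDXL assets
```

      The practical fix is the one the reply suggests: keep the checkpoint, ControlNets, and detailer models all from the same family.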

  • @CornPMV
    @CornPMV 2 months ago

    One question: what can I do if I have several people in my picture, e.g. in the background? Can I somehow influence FaceDetailer to only refine the main person in the middle?

    • @maxehrlich
      @maxehrlich 1 month ago

      Probably crop that section, run the fix, and composite it back in.

  • @zhongxun2005
    @zhongxun2005 2 months ago

    Thank you for sharing! Subscribed :) I have a question about the AIO Aux Preprocessor in the 2 SDXL workflow. I don't see the LineartStandardPreprocessor option; the closest one is LineartPreprocessor, but it throws the error "Error occurred when executing AIO_Preprocessor: LineartDetector.from_pretrained() got an unexpected keyword argument 'cache_dir'"

    • @zhongxun2005
      @zhongxun2005 2 months ago

      Never mind, I have it resolved. I replaced both with "[Inference.Core] AIO Aux Preprocessor", which has the option. Hope this helps others.

  • @PavewayIII-gbu24
    @PavewayIII-gbu24 2 months ago

    Great tutorial, thank you

  •  2 months ago

    This is a wonderfully good job! I just found it and it works amazingly well! Do you have a workflow that does the same thing img2img?

  • @goactivemedia
    @goactivemedia 2 months ago

    When I run this I get - The operator 'aten::upsample_bicubic2d.out' is not currently implemented for the MPS device?

  • @Mranshumansinghr
    @Mranshumansinghr 3 months ago

    Much better explanation of Cascade in ComfyUI, thank you. Will try this today. The b to c and then a is a bit confusing and only works sometimes. This is much simpler and requires fewer files.

  • @aliyilmaz852
    @aliyilmaz852 3 months ago

    Amazing share! Thanks again. I am old and have lots of b/w photos; will give it a try. And if I can, I will try to swap the faces with current ones :) Maybe you can teach us how to swap faces; would definitely appreciate it!

  • @PIQK.A1
    @PIQK.A1 3 months ago

    How to face-detail vid2vid?

  • @cheezeebred
    @cheezeebred 3 months ago

    I'm missing the BNK_UNsampler and can't find it via Google search. What am I doing wrong? Can't find it in Manager either.

  • @lumina36
    @lumina36 3 months ago

    I'm amazed that no one has ever thought of combining Stable Forge with both Krita and Cascade; it would actually solve a lot of problems.

  • @SumNumber
    @SumNumber 3 months ago

    This is cool, but it is just about impossible to see how you connected all these nodes together, so it did not help me at all. :O)

    • @HowDoTutorials
      @HowDoTutorials 3 months ago

      Yeah I’ve been working on making things a little easier to parse going forward. There’s a link to the workflow in the description if you want to load it up and poke around a bit.

  • @aliyilmaz852
    @aliyilmaz852 3 months ago

    Thanks for the great explanation; hope you do more videos like that.

  • @focus678
    @focus678 3 months ago

    What GPU are you using?

    • @HowDoTutorials
      @HowDoTutorials 3 months ago

      I'm using a 3090 which is probably something I should mention going forward so people can set their expectations properly. 😅

  • @onurc.6944
    @onurc.6944 3 months ago

    When it comes to the SVD decoder, the connection is lost :(

    • @HowDoTutorials
      @HowDoTutorials 3 months ago

      Sorry to hear it's giving you trouble. Here are a couple of things to try: 1. Make sure you're using the correct decoder model for your SVD model (e.g. if using the "xt" model, be sure you're using the "xt" decoder). 2. You may be running out of memory; try lowering the `video_frames` parameter. You might also try using the non-xt model and decoder.

    • @onurc.6944
      @onurc.6944 3 months ago

      @HowDoTutorials Thanks for your help :) I can work without the image_decoder.

  • @RuinDweller
    @RuinDweller 4 months ago

    After I discovered ComfyUI, my life changed forever. It has been a dream of mine for 5 years now to be able to run models and manipulate their latent spaces locally. ...But then I discovered just how hard it is for a noob like me to get a lot of these workflows working - at all - even after downloading and installing all of the models required, in the proper versions, with all of the nodes loaded and running together normally. This was one of about 3 that actually worked for me, and it is BY FAR my favorite one. I downloaded it as a "color restorer" and it works beautifully for that purpose, but I was so excited to see it featured in this video, because it already works for me! Now I can unlock its full potential, and it turns out all I needed were the proper prompts! THANK YOU so much for making these workflows and these video tutorials; I can't tell you how much you've helped me! If you ever decide to update any of this to utilize SDXL, I am so on that...

    • @HowDoTutorials
      @HowDoTutorials 3 months ago

      I loved reading this comment and I'm so happy I could help make this tech a bit more accessible. Here's a version of the "Reimagine" workflow updated for SDXL: comfyworkflows.com/workflows/4fc27d23-faf3-4997-a387-2dd81ed9bcd1 You'll also need these additional controlnets for SDXL: huggingface.co/stabilityai/control-lora/tree/main/control-LoRAs-rank128 Have fun and don't hesitate to reach out here if you run into any issues!

    • @RuinDweller
      @RuinDweller 3 months ago

      @HowDoTutorials I thought I had already responded to this, but apparently I didn't! Anyway, THANK YOU for posting the link to that workflow! It's running, but I can't get it to colorize anymore, which was my main use for it. :( Oh well, it can still edit B/W images, and then I can colorize them in the other workflow, but I would love to be able to do both things in one. I can colorize things, but not people. I've tried every conceivable prompt. :(

    • @HowDoTutorials
      @HowDoTutorials 3 months ago

      @RuinDweller I've been having trouble getting it to work as well. It seems there's something about SDXL that doesn't play quite as well with that use case. I'll keep at it and let you know if I figure something out.

  • @jroc6745
    @jroc6745 4 months ago

    This looks great, thanks for sharing. How can this be altered for img2img?

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      Here's a modified workflow: comfyworkflows.com/workflows/cd47fbe6-68cc-4f40-8646-dfc62d32eeb4

  • @mikrodizels
    @mikrodizels 4 months ago

    That FaceDetailer looks amazing, I like creating images with multiple people in them, so faces are the bane of my existence

    • @amorgan5844
      @amorgan5844 4 months ago

      It's the most discouraging part of making AI art.

    • @greypsyche5255
      @greypsyche5255 3 months ago

      Try hands.

  • @MultiSunix
    @MultiSunix 4 months ago

    This is great and helpful, thank you!

  • @teenudahiya01
    @teenudahiya01 4 months ago

    Hi, can you help me solve this error: "module diffusers has no attribute StableCascadeUnet"? I installed Cascade in Stable Diffusion, but I got this error after installing all the models on Windows 11.

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      It sounds like your diffusers package may be out of date. If you haven’t already, try updating ComfyUI. If you have the Windows portable install you can go into ComfyUI_windows_portable/update folder and run `update_comfyui_and_python_dependencies.bat`.

  • @97BuckeyeGuy
    @97BuckeyeGuy 4 months ago

    You have an interesting cadence to your speech. Is this a real voice or AI?

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      A bit of both. I record the narration with my real voice, edit out the spaces and ums (mostly), and then pass it through ElevenLabs speech to speech.

    • @97BuckeyeGuy
      @97BuckeyeGuy 4 months ago

      @HowDoTutorials That explains why I kept going back and forth with my opinion on this. Thank you 👍🏼

    • @lukeovermind
      @lukeovermind 1 month ago

      @HowDoTutorials That's very clever. It's a very soothing voice.

  • @jocg9168
    @jocg9168 4 months ago

    Great workflow for the fix. I'm wondering, with proper scenes where characters are not actually looking at the camera - like a 3/4 view, looking at a phone, using a tablet or something, not creepily staring at the camera - am I the only one who gets bad results on those types of images? But I will definitely try this new fix. Thanks for the tip.

  • @JonDankworth
    @JonDankworth 4 months ago

    Stable Cascade takes too long only to create images that are not truly better.

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      I agree for the most part. There are a few things it can do better than other models without special nodes, such as text and higher resolutions, but in general I think its strengths won’t really show until some fine tunes come out. That said, given its current licensing and the upcoming SD3 release, that may not matter much either.

  • @AngryApple
    @AngryApple 4 months ago

    Would a Lightning model be a plug-and-play replacement for this, just because of the different license?

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      I've tested the JuggernautXL lightning model and it works great without any modification to the workflow. Some models may work better with different schedulers, cfg, etc., but in general they should work fine.

    • @AngryApple
      @AngryApple 4 months ago

      @HowDoTutorials I will try it, thanks!

  • @JefHarrisnation
    @JefHarrisnation 4 months ago

    This was a huge help, especially showing where the models go. Running smoothly and producing some very nice results.

  • @kamruzzamanuzzal3764
    @kamruzzamanuzzal3764 4 months ago

    So that's how you correctly use turbo models. Till now I used 20 steps with turbo models and just 1 pass; it seems using 2 passes with 5 steps each is much, much better. What about using Deep Shrink alongside it?

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      I just played around with it a bit and it doesn’t seem to have much of an effect on this workflow, likely because of the minimal upscaling and lower denoise value, but thanks for bringing that node to my attention! I can definitely see a lot of other uses for it. EDIT: I realized I was using it incorrectly by trying to inject it into the second pass. Once I figured out how to use it properly, I could definitely see the potential. It's hard to tell whether the Kohya by itself is better than the two pass or not, but Kohya into a second pass is pretty great. I noticed that reducing CFG and steps for the second pass is helpful to reduce the "overbaked" look.

  • @rovi-farmiigranhermanodela8693
    @rovi-farmiigranhermanodela8693 4 months ago

    What about all those videos where they use inpainting tools to edit pictures or to apply "filters"? Which AI can do that?

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      You can do that with ComfyUI too, though in-painting can be done a bit more easily with AUTOMATIC1111. I don’t have a video covering in-painting yet, but this method can give you something like the “filters” you mentioned: Reimagine Any Image in ComfyUI th-cam.com/video/CRURtIltf58/w-d-xo.html

  • @AkoZoom
    @AkoZoom 4 months ago

    Very easy step-by-step tutorial! Thank you! But my RTX 3060 12GB takes nearly 2 min for the 4 images, and the last one (which has a special H) is also different (?)

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      You may want to try using the lite models or adjusting the resolution down to 1024x1024 to improve generation speed. You may also have better luck using the new models specifically for ComfyUI. Here's an updated tutorial: th-cam.com/video/GOnMXejA8Fc/w-d-xo.html

    • @AkoZoom
      @AkoZoom 4 months ago

      @HowDoTutorials Oh yep, thank you! So the models no longer go in the UNet folder but in the regular checkpoints folder.

  • @kamruzzamanuzzal3764
    @kamruzzamanuzzal3764 4 months ago

    Question: what happened to Stable Cascade stage a (VAE)? I don't see it. Edit: OK, got the answer; another person already asked. Anyway, subscribed, because not many people are experimenting with Stable Cascade and sharing their findings like you.

  • @WalidDingsdale
    @WalidDingsdale 4 months ago

    I really have not figured out the applicability of Cascade yet; thanks for sharing this all the same.

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      I’ve noticed its biggest strengths are composition and text while still allowing variety in output. There are some great fine tunes for SDXL out there that offer better composition for certain styles, but can be more limited in their breadth. Honestly though, I think the main upside of Stable Cascade is not the current checkpoint, but the method and how it allows for creating fine tunes at a reduced cost.

  • @andriiB_UA
    @andriiB_UA 4 months ago

    Where is the VAE "stage_a"? Or is it not necessary?

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      Not necessary as a separate model for this method. It’s been baked in as the VAE of the stage b checkpoint for the ComfyUI-specific models.

  • @TinusvdMerwe
    @TinusvdMerwe 4 months ago

    Fantastic, I appreciate the time taken to explain some concepts in detail, and the generally easy, unhurried tone.

  • @Vectorr66
    @Vectorr66 4 months ago

    Are you on discord?

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      Not currently, but it's probably about time for me to make an account and get on there. 😅

  • @Vectorr66
    @Vectorr66 4 months ago

    I do wish you could make the noodles less noticeable, ha.

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      Usually I'll adjust it for myself to make things look cleaner, but it makes it harder to see what the connections are so I switch it to ultimate noodle mode for videos. You can change it by clicking the gear in the menu to the right and switching the Link Render mode.

  • @Vectorr66
    @Vectorr66 4 months ago

    I do agree with the overbaked look.

  • @Vectorr66
    @Vectorr66 4 months ago

    Thanks!

  • @Metalman750BC
    @Metalman750BC 4 months ago

    Excellent! Great explanation.

  • @KarlitoStudio
    @KarlitoStudio 4 months ago

    Thanks... is there a workflow to fix faces and hands with Cascade? 🤗

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      I've been working on one for faces that I'll be sharing in an upcoming video. In the meantime, explore this node (github.com/mav-rik/facerestore_cf) in combination with HiRes fix. For hands, you might try HandRefiner (github.com/wenquanlu/HandRefiner), which is included with the controlnet_aux preprocessors for ComfyUI (github.com/Fannovel16/comfyui_controlnet_aux).

  • @Vectorr66
    @Vectorr66 4 months ago

    I see there is an update; is there a video coming for that? Do we still need this workflow for Comfy? Sorry, I am new to Comfy and just curious. Thanks!

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      I’m actually editing the video right now! It’ll cover the new method as well as some techniques I’ve discovered. Should be up by this evening. 😁

    • @Vectorr66
      @Vectorr66 4 months ago

      @HowDoTutorials Thanks!

  • @entertainmentchannel9632
    @entertainmentchannel9632 4 months ago

    I get this error (AttributeError 16 KSamplerAdvanced): ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1688, in __getattr__ raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'") AttributeError: 'ModuleList' object has no attribute '1'

    • @HowDoTutorials
      @HowDoTutorials 4 months ago

      It's likely you are attempting to use an SDXL checkpoint with SD 1.5 ControlNets. To fix it, either switch to an SD 1.5-based checkpoint or use ControlNet models for SDXL. You can find links to the SDXL ControlNets here: huggingface.co/docs/diffusers/v0.20.0/en/api/pipelines/controlnet_sdxl