ComfyUI - Hands are finally FIXED! This solution works with all models!

  • Published Nov 23, 2024

Comments • 229

  • @joelface
    @joelface 10 months ago +14

    So cool that this works! Love the ingenuity that it must have taken to figure this all out.

    • @sedetweiler
      @sedetweiler  10 months ago +1

      It was a bit of a pain to watch if you check out the live stream from last Saturday. That seed was the major issue.

  • @gingercholo
    @gingercholo 9 months ago +5

    Super specific use case: it only works when the subject's hands are already much like the reference image you're using. If not, the depth maps it comes up with are straight trash.

  • @lennoyl
    @lennoyl 10 months ago +3

    Thanks for all your videos.
    I was a little lost with all those node versions, but now I'm starting to understand better how to use ComfyUI.

  • @mistraelify
    @mistraelify 9 months ago +1

    Well, that works fine with big hands but not very well with 3-4 characters in the picture, little hands, closed hands, or specific poses. Sometimes the MeshGraphormer gives bad results. But it's definitely the path to use for correcting details without altering too much of the original seed. I'm impressed by how well it works.

  • @gimperita3035
    @gimperita3035 10 months ago +10

    So grateful I'm starting to understand how things flow in ComfyUI without feeling too lost. It sounded like Chinese to me a couple of months ago. Now it's like German. Still rough but somehow familiar. 😆 Thank you for this!

    • @sedetweiler
      @sedetweiler  10 months ago

      Glad it was helpful!

    • @furiousnotch7914
      @furiousnotch7914 10 months ago

      @@sedetweiler I just wanted to know: what are the minimum system requirements for running ComfyUI smoothly, without any problems?
      Appreciate you 🙂

    • @sedetweiler
      @sedetweiler  10 months ago +1

      Probably 4 GB of VRAM.

    • @furiousnotch7914
      @furiousnotch7914 10 months ago

      @@sedetweiler I've tried with 4 GB VRAM and 16 GB RAM; it takes 2:16 hours to generate and upscale one image.
      RTX 4060 8 GB VRAM with 16 GB RAM ✌️, RTX 3060 12 GB VRAM with 16 GB RAM ✌️, or RTX 3060 8 GB VRAM with 16 GB RAM ✌️... (I have an i7 12th gen.) Which of these three do you prefer?
      Don't know which one would be best for faster image generation and upscaling...
      Thanks for your earlier response 🙂

    • @Renzsu
      @Renzsu 10 months ago +2

      @@furiousnotch7914 VRAM takes priority; the more the better. Then think about the speed of the card. The new 4070 Super seems to be a happy middle ground of the latest generation. Smaller budget? 4060 Ti 16 GB. Bigger budget? Think 4080 Super or 4090. Of the 30 series, I would take the fastest one with at least 16 GB. But honestly, I would save up a bit more and go straight to the 40 series.

  • @nixonmanuel6459
    @nixonmanuel6459 3 months ago

    Thank you immensely! Now I just need the hand detailer and face detailer to combine into a final image.

  • @byxlettera1452
    @byxlettera1452 6 months ago +2

    The best video I have seen so far. Very clear and it gets to the point. Nothing to add. Thanks

  • @RichGallant
    @RichGallant 10 months ago +3

    Hi, that is very cool and works well for me. Once again your explanations are clear and very simple to follow. As an old guy who learns best by reading, these are great.

    • @sedetweiler
      @sedetweiler  10 months ago

      Great to hear!

  • @lumbagomason
    @lumbagomason 10 months ago +3

    One more thing you can do is send the final image to Fooocus: image prompt > inpaint > improve face, hands (2nd option), paint both hands, and use the quick prompt called "detailed hand".
    Edit: This is AFTER you have refined the hands using the above tutorial.

    • @sedetweiler
      @sedetweiler  10 months ago +1

      Thanks for sharing!

    • @b4ngo540
      @b4ngo540 10 months ago +5

      @@sedetweiler This sounds interesting. If you've tested this and think it's effective, we would love to see a part 2 of this video doing these extra steps for a perfect result.

  • @rickandmortyunofficial8986
    @rickandmortyunofficial8986 10 months ago +2

    Thank you for making a tutorial by building the nodes manually; it really helps clarify each node's function, unlike other channels that present workflows with ready-made nodes.

  • @divye.ruhela
    @divye.ruhela 5 months ago +1

    Love this! Have learnt a lot from this entire playlist, thanks!

  • @gab1159
    @gab1159 10 months ago +3

    Awesome man, trying this now, your tutorials are great and easy to follow. A godsend!

    • @sedetweiler
      @sedetweiler  10 months ago

      Glad I could help!

  • @fabiotgarcia2
    @fabiotgarcia2 9 months ago +6

    Hi Scott! First of all, I want to congratulate you on your amazing tutorials. Thank you!! Could you please create another version of this workflow where, instead of using a prompt to create an image, we upload an image?

  • @geoffphillips5293
    @geoffphillips5293 1 month ago

    This is super useful, and your patient explaining is much better than many of the rushed jobs on YouTube. However, I found the MeshGraphormer rather disappointing, especially where you've got several people. Segment Anything Ultra V2 seems to work quite well though (with a prompt of "hands").

  • @murphylanga
    @murphylanga 8 months ago +1

    Thanks for your video. You can use the global seed if you set the seed in an extra primitive node and fix it
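
The tip above is about wiring one primitive node's seed into every sampler so they all stay in sync. The effect of pinning a seed can be illustrated in plain Python; this is a generic sketch with numpy's generator standing in for a KSampler's noise source, not a ComfyUI API:

```python
import numpy as np

def sample_noise(seed, shape=(4,)):
    """Stand-in for a sampler's initial noise: a fixed seed
    always yields the same starting noise, so the result is
    reproducible across runs and across samplers sharing it."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

shared_seed = 42  # one "primitive" value feeding every sampler
assert np.allclose(sample_noise(shared_seed), sample_noise(shared_seed))
assert not np.allclose(sample_noise(shared_seed), sample_noise(shared_seed + 1))
```

In ComfyUI terms, converting the seed widget to an input and feeding both samplers from one primitive node plays the role of `shared_seed` here.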

    • @sedetweiler
      @sedetweiler  8 months ago

      Cool, thanks

  • @Prasanth-yb1bg
    @Prasanth-yb1bg 2 months ago

    The way you explain is awesome. You've earned a sub.

  • @oldmanliving
    @oldmanliving 10 months ago +23

    Please try different hand poses and you will see it never fixes hands. When the ControlNet depth preprocessor gives you a bad depth hand, you will still get a bad hand. Even when it gives you a good preprocessed depth hand, for different hand poses it will still generate flipped or reversed bad hands. I am so sorry to tell the truth.

    • @sedetweiler
      @sedetweiler  10 months ago +1

      It isn't perfect, but again this works in 90% of the situations where we get bad hands.

    • @DanielWoj
      @DanielWoj 8 months ago +2

      I would say that it improves 50% of photo-like images, but maybe 10-20% of paintings or some low-CFG styles.

    • @poipoi300
      @poipoi300 2 months ago +1

      @@sedetweiler Lol, that's wishful thinking. It only works well when the hands are prominent in the image and in a common pose, not interacting with anything. It ends up messing up perfectly good hands more than fixing bad ones.

    • @poipoi300
      @poipoi300 22 days ago

      @@matthewfuller9760 Models are getting better and soon bad hands will be extremely rare. That said, yes you can redraw images or pose models with depth information. I redraw and edit, then inpaint the area with a low denoise.

    • @poipoi300
      @poipoi300 22 days ago

      @@matthewfuller9760 You can also model the object. As for drawing, you just need to know how to draw.

  • @JolieKim2795
    @JolieKim2795 9 months ago +2

    It cannot be completely eradicated. Only post-processing with pts AI can help, and sometimes when the hand is quite stable, it may add an extra glove or piece of steel on the hand.

  • @davidm8505
    @davidm8505 10 months ago +1

    This is great. Thank you for making it so clear and simple. Would you happen to have any videos on maintaining consistency of characters across multiple renders? Many situations require more than just one shot of a character but I find consistency almost impossible to achieve just by text alone.

  • @lmbits1047
    @lmbits1047 9 months ago +4

    For some reason the hands in the picture I am trying this on don't get detected. I guess this method only works for hands that are already clear enough to be recognized as hands.

  • @kietzi
    @kietzi 8 months ago

    Very nice tutorial. Looks like compositing^^ So as a comp artist, I love this workflow :)

  • @pihva_rusni
    @pihva_rusni 6 months ago +1

    I would like to try it, but I can't see the workflow attached here or in the community tab. Although I'm not sure if it will work due to hardware limitations (rx580) and software differences (sd 1.5, torch, nodes).

  • @lesserdak
    @lesserdak 10 months ago +2

    I made it to 4:30 and then nothing shows up in Controlnet Models, "undefined". I went to manager > install custom nodes > Fannovel16 which says "NOTE: Please refrain from using the controlnet preprocessor alongside this installation, as it may lead to conflicts and prevent proper recognition." Not sure how to proceed. Is my ComfyUI installation bad?

    • @LGCYBeats
      @LGCYBeats 10 months ago +2

      Same here, not sure what I'm doing wrong

  • @Enu_Vibe
    @Enu_Vibe 10 months ago +1

    I used to enjoy your Midjourney tutorials and workflows. Can I ask why you stopped? Now that the models are even more powerful, I wish we could turn to experts like you.

    • @sedetweiler
      @sedetweiler  10 months ago +2

      I guess I just need to make some. I have a few ideas for them that I have not seen covered. Thank you for the suggestion!

  • @christosmak.6741
    @christosmak.6741 4 months ago

    Thanks for this. I see this video is now 5 months old. When attempting to install the Controlnet preprocessors in the latest version of ComfyUI, there is the following warning in red:
    NOTE: Please refrain from using the controlnet preprocessor alongside this installation, as it may lead to conflicts and prevent proper recognition.
    What do you advise?

    • @christosmak.6741
      @christosmak.6741 4 months ago

      I have now tried this method and it still messes up the hands big time.

  • @ai_materials
    @ai_materials 9 months ago

    Thank you for all the useful information!☺

  • @ThoughtFission
    @ThoughtFission 7 months ago +1

    Hey Scott, really surprised you're not ahead of the curve with something about an SD3 how-to.

  • @Marcus_Ramour
    @Marcus_Ramour 9 months ago

    very clear and well explained, many thanks for sharing!

  • @BabylonBaller
    @BabylonBaller 10 months ago +1

    Great vid, thanks Scott. Guys, if you're using A1111, it takes just two clicks to enable Hand Refiner in ControlNet and fix hands lol. But the noodles are much more fun, if you have time to kill.

    • @sedetweiler
      @sedetweiler  10 months ago +1

      The difference for me is that I know how it works. With much of A1111, you check a box and the magic happens. With Comfy you actually control and learn how it all goes together. It is the difference between just eating in a restaurant and also knowing how to cook.

    • @traugdor
      @traugdor 10 months ago +1

      The Hand Refiner in ControlNet isn't as powerful as the fine control you have in ComfyUI. One-button solutions always have issues. I've used both and always get better results with ComfyUI.

  • @atomicchewbacca1663
    @atomicchewbacca1663 7 months ago +1

    I got to the point where the MeshGraphormer is added to the UI; however, all it generates is a black box. I installed the ComfyUI Manager and such. Are there some videos I should go back and watch before trying the methods in this video?

  • @bryanbondoc-y4s
    @bryanbondoc-y4s 12 days ago

    Is there a way to run the hand refiner without going through the KSampler again? The first image created is always different from the second.

  • @DrunkenKnight71
    @DrunkenKnight71 10 months ago

    thank you! i'm getting great results using this...

  • @preecefirefox
    @preecefirefox 10 months ago +1

    Great video, thanks for making it! Have you tried it with a person holding something? I’m wondering how well it works if part of the hand is meant to be not visible 🤔

    • @sedetweiler
      @sedetweiler  10 months ago

      Not sure, but it is worth trying!

  • @ggenovez
    @ggenovez 5 months ago

    AWESOME video. Quick question: where do you get the depth files for the Load ControlNet Model node? Newbie here.

  • @dmarcogalleries254
    @dmarcogalleries254 6 months ago +1

    Can you next time go more into the SD3 Creative Upscaler? I don't find much info on it. So you don't use it with a 2k image? It says 1000 or less? I'm trying to figure out if it is worth it at 25 cents per upscale. Thanks!

  • @Lunarsong.
    @Lunarsong. 10 months ago +2

    This may be a dumb question but does this process also work for cartoon/anime models?

  • @SuperFunHappyTobi
    @SuperFunHappyTobi 8 months ago +2

    I am getting an error:
    "Error occurred when executing KSampler:
    mat1 and mat2 shapes cannot be multiplied (308x2048 and 768x320)"
    Does anyone know what this is? I am running the inpaint depth hand ControlNet model that is recommended on the GitHub.
    Seems to be an error with the KSampler.

  • @drozd1415
    @drozd1415 9 months ago +1

    Do you have any solution if I'm getting a "new(): expected key in DispatchKeySet(CPU, CUDA, HIP, XLA, MPS, IPU, XPU, HPU, Lazy, Meta) but got: PrivateUse1" error while using MeshGraphormer?
    PS: Great video, I just wish it would run on my AMD PC xD

  • @jeffreytarqawitzbathwaterv3086
    @jeffreytarqawitzbathwaterv3086 3 months ago

    Finally! FLUX is the future!

  • @ysy69
    @ysy69 7 months ago

    Thanks for this video. Have you tried to see if this works with SDXL workflows?

  • @bronsonvdbroeck
    @bronsonvdbroeck 7 months ago +2

    The ControlNet model doesn't work with an AMD setup; save yourselves the time, homies.

  • @potusuk
    @potusuk 10 months ago

    Nice follow up, thanks Scott

    • @sedetweiler
      @sedetweiler  10 months ago

      Any time!

  • @VintageForYou
    @VintageForYou 3 months ago

    Great video. How can I grab the ControlNet from the Manager?

  • @AndyErhard51
    @AndyErhard51 4 months ago +1

    Have you seen that it messes up the background when you choose "original" for the large boxes?

    • @sumitsonawane7945
      @sumitsonawane7945 10 days ago

      Same here, have you got any solution?

  • @tomasm1233
    @tomasm1233 7 months ago +1

    Hi Scott. Is it possible to use ComfyUI to do inpainting on a pre-existing image?

  • @IrrealKIM
    @IrrealKIM 9 months ago

    Thank you, that works perfectly!

    • @sedetweiler
      @sedetweiler  9 months ago

      Glad it helped!

  • @pixelhusten
    @pixelhusten 10 months ago +2

    It doesn't work with every model either. Graphormer has its problems with hands that originate from 2D or 2.5D models. Apparently the depth information that Graphormer needs to recognize fingers is missing.

    • @sedetweiler
      @sedetweiler  10 months ago

      So far I have had great luck with it, even using non-AI images as starting points. I think it is a pretty flexible tool.

  • @grafik_elefant
    @grafik_elefant 10 months ago

    Wonderful! Thanks for sharing! 👍

    • @sedetweiler
      @sedetweiler  10 months ago

      Thank you! Cheers!

  • @dannyvfilms
    @dannyvfilms 10 months ago

    Great stuff! Do you know if there’s a community node for Invoke for this? I’m not sure how interchangeable or inter-compatible the nodes are.

    • @sedetweiler
      @sedetweiler  10 months ago

      I don't know. I love the Invoke project for a lot of reasons, but I just have not used it lately as I live in comfy most of the day.

  • @ragoutvideo1
    @ragoutvideo1 1 month ago

    Just tried that with a Flux model and a Flux ControlNet... sadly not getting any good result on the second image. It's just some random glitch stuff put on top of the old image.

  • @maxfxgr
    @maxfxgr 10 months ago

    Amazing video! Learnt so much from this, Scott! A new random question arises: what's the name of the plugin that shows which node is executing at runtime in the top left? :)

    • @sedetweiler
      @sedetweiler  10 months ago

      That is from the pythongosssss pack.

  • @teambellavsteamalice
    @teambellavsteamalice 7 months ago

    Is there a way to split the image into background and person, fix the hands, and then recombine?
    Maybe also model the pose (body and hands), so any animation of it can be done very precisely and consistently?

  • @lemonZzzzs
    @lemonZzzzs 10 months ago +1

    now that's pretty cool!

    • @sedetweiler
      @sedetweiler  10 months ago +1

      I am loving it!

  • @radiantraptor
    @radiantraptor 7 months ago +1

    I can't figure out how to make this work. Even if the MeshGraphormer produces good results and the hands look nice in the depth map, the hands in the final image often look worse than in the image before MeshGraphormer. It seems that the second KSampler messes up the hands again. Is there any way to avoid this?

    • @sedetweiler
      @sedetweiler  7 months ago

      You can always use a different model for the 2nd sampler. Be sure you use a different seed! That was one I tripped over.

    • @chucklesb
      @chucklesb 6 months ago

      @@sedetweiler wish this helped. I'm using the same model you are in the video and it just makes it worse.

  • @JuniorS.18
    @JuniorS.18 3 months ago

    Hi!! My version of ComfyUI doesn't have the Manager button, so I can't incorporate nodes. Why is that?

  • @junenightingale695
    @junenightingale695 19 days ago

    How do I get to see your live streams?

  • @risewithgrace
    @risewithgrace 10 months ago

    Thank you! Can you share how to do this with moving hands in a video?

    • @b4ngo540
      @b4ngo540 10 months ago

      Use the "Image Batch To Image List" node as input for this hand fixer.
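
The suggestion above amounts to iterating the hand fixer over every frame of a video batch. A rough numpy sketch of that iteration; `fix_hands` is a placeholder for the whole depth-based subgraph, not a real ComfyUI call:

```python
import numpy as np

def fix_hands(frame):
    """Placeholder for the per-image hand-fix pipeline; here it just
    clamps values so the loop is runnable end to end."""
    return np.clip(frame, 0.0, 1.0)

def fix_hands_per_frame(batch):
    """batch: (N, H, W, 3) float array, like an image batch in ComfyUI.
    'Image Batch To Image List' effectively performs this split, and the
    downstream nodes run once per frame before the results are re-stacked."""
    return np.stack([fix_hands(frame) for frame in batch])

frames = np.random.rand(8, 64, 64, 3) * 1.5  # some values above 1.0
fixed = fix_hands_per_frame(frames)
assert fixed.shape == frames.shape
```

Note that running the fixer independently per frame does nothing to enforce temporal consistency between frames; that is a separate problem.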

  • @MultiSunix
    @MultiSunix 8 months ago

    Even though I used the Juggernaut model, if I change the style to comic this workflow doesn't work; the MeshGraphormer cannot identify hands in comic style. Any suggestions if I need to fix hands in comics?

    • @sedetweiler
      @sedetweiler  8 months ago

      I am not sure, as I don't really do a lot of comics. I think the Swiss7 model or something similar was helpful there.

    • @MultiSunix
      @MultiSunix 8 months ago

      @@sedetweiler Thanks, will give it a try.

  • @korinlifshits8780
    @korinlifshits8780 8 months ago

    Hi, great content, thank you. Where is the workflow JSON for the video? Thank you.

    • @sedetweiler
      @sedetweiler  8 months ago

      They are in the community tab here on YouTube. That is the only method they give us for communication, unfortunately. Thank you for supporting the channel!

  • @Shirakawa2007
    @Shirakawa2007 10 months ago

    Thank you very much for this!

    • @sedetweiler
      @sedetweiler  10 months ago

      You're very welcome!

  • @MrVovsn
    @MrVovsn 10 months ago

    Thanks for the tutorial!

    • @sedetweiler
      @sedetweiler  10 months ago

      You are welcome! Thanks for taking the time to leave a comment. Cheers!

  • @scottownbey9340
    @scottownbey9340 10 months ago

    Scott, great stuff! I ran into some snags applying this to a workflow with 2 other ControlNets (Depth + OpenPose). I'm not using Advanced ControlNet for the other 2, and only 1 KSampler. Do I need 2 KSamplers like in your video?

    • @sedetweiler
      @sedetweiler  10 months ago

      The first one creates the flawed image, the graphormer can then spot the hands, and the second sampler fixes them. So I am using 2 samplers for that reason. Because this works so well with just depth, I am not throwing all the ControlNets at it; it just works as-is quite often.
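
The two-sampler flow described here boils down to: sample once, detect a hand mask, re-sample, and keep the re-sampled pixels only inside the mask. A minimal numpy sketch of that final masked composite; array names are illustrative, and in ComfyUI the masking typically happens at the latent level (e.g. via a Set Latent Noise Mask node) rather than on pixels:

```python
import numpy as np

def composite_fixed_hands(base_img, fixed_img, hand_mask):
    """Blend the second pass back into the first.

    base_img, fixed_img: float arrays in [0, 1], shape (H, W, 3)
    hand_mask: float array in [0, 1], shape (H, W); 1 = hand pixels
    Outside the mask, the first pass survives untouched.
    """
    mask = hand_mask[..., None]  # broadcast the mask over the color channels
    return mask * fixed_img + (1.0 - mask) * base_img

# Toy example: 4x4 image, "fix" only the top-left 2x2 block.
base = np.zeros((4, 4, 3))   # first-pass result
fixed = np.ones((4, 4, 3))   # second-pass result
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0           # detected hand region

out = composite_fixed_hands(base, fixed, mask)
assert out[0, 0, 0] == 1.0   # inside mask: taken from the second pass
assert out[3, 3, 0] == 0.0   # outside mask: first pass preserved
```

This is why the rest of the image keeps the first sampler's composition: only the masked region is ever replaced.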

    • @scottownbey9340
      @scottownbey9340 10 months ago +1

      I got my workflow working with one KSampler using a 1.5 model (I'm using ControlNet for the body (DWOpenPose + Depth) and now MeshGraphormer) and got to the point where I generated great hands, but the image totally changed. So I added the Set Latent Noise Mask node with samples going into an Empty Latent Image (replacing the one from the KSampler), and now the image is totally gone. So frustrating, as I was almost there. Any guidance would be appreciated.

    • @scottownbey9340
      @scottownbey9340 10 months ago +1

      Got it working! Thanks @@sedetweiler

    • @sedetweiler
      @sedetweiler  10 months ago

      Awesome! It sounded like you were SO close! That is great news!

    • @RhapsHayden
      @RhapsHayden 6 months ago

      @@scottownbey9340 Did you end up adding another KSampler or staying with one?

  • @Comenta-san
    @Comenta-san 10 months ago

    😯so simple. I love ComfyUI

    • @sedetweiler
      @sedetweiler  10 months ago

      It really is, for such a terrible issue. Cheers!

  • @rakly347
    @rakly347 8 months ago

    How do you make it so you don't see the 2 squares in your end image where it repainted the hands? You can even see them in your YouTube video.

  • @lilillllii246
    @lilillllii246 10 months ago

    Hello! Is there a way to integrate two JSON files with different functions in ComfyUI? One does the inpaint function, and the other maintains a consistent character through FaceID, but I'm having trouble linking the two.

  • @BrunoBissig
    @BrunoBissig 10 months ago

    Hi Scott, thanks for the update. I'm also trying this with img2img but I can't get it to work properly. Maybe an idea for another video?

    • @sedetweiler
      @sedetweiler  10 months ago +1

      Sure! That should be as simple as replacing the empty latent with a VAE-encoded image and using the samples off of that.
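
In API-format prompt JSON terms, the swap Scott describes is one rewired input: the KSampler's `latent_image` stops pointing at an EmptyLatentImage node and instead points at a VAEEncode node fed by a LoadImage node. A hypothetical fragment; the node ids and widget values are made up for illustration, and a real KSampler entry also needs model, seed, steps, cfg, and conditioning inputs:

```python
# Fragment of a ComfyUI API-format prompt, expressed as a Python dict.
# Each input is either a widget value or ["source_node_id", output_index].
img2img_patch = {
    "10": {"class_type": "LoadImage",
           "inputs": {"image": "input.png"}},          # source picture
    "11": {"class_type": "VAEEncode",
           "inputs": {"pixels": ["10", 0],             # image -> latent
                      "vae": ["4", 2]}},               # VAE from the checkpoint loader
    "7":  {"class_type": "KSampler",
           "inputs": {"latent_image": ["11", 0],       # was ["5", 0] (EmptyLatentImage)
                      "denoise": 0.6}},                # < 1.0 keeps the source structure
}

# The sampler now starts from the encoded image rather than empty noise.
assert img2img_patch["7"]["inputs"]["latent_image"] == ["11", 0]
```

Lowering `denoise` below 1.0 is what makes this behave as img2img instead of regenerating the picture from scratch.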

    • @BrunoBissig
      @BrunoBissig 10 months ago

      Hi Scott, now it works. I think my input image was not the right choice. I changed it to the girl in your video with six fingers as input, and now it's fixed and I get five fingers. Thanks! @@sedetweiler

  • @T86k35
    @T86k35 5 months ago +1

    Hands are finally fixed, but what about the rest of the body in SD3? You did say you were head of quality assurance at Stability; are you all hiding under a table there?

  • @AlIguana
    @AlIguana 10 months ago

    Amazing! I couldn't get it to work though; it won't detect the hands (the "display mask" box is just a black square every time, and I can't work out why). Still, something to work on :)

  • @Gradashy
    @Gradashy 5 months ago

    I have installed the ControlNet, but that node does not appear for me.

  • @wootoon
    @wootoon 8 months ago

    I can use it normally with the SD1.5 model, but I always get an error when I use an SDXL model.

  • @hunhs
    @hunhs 10 months ago

    Good job!

    • @sedetweiler
      @sedetweiler  10 months ago

      Thank you! Cheers!

  • @Sly6311
    @Sly6311 1 month ago

    When I run the preview of the MeshGraphormer it gives me this error:
    MeshGraphormer-DepthMapPreprocessor
    shape '[1, 9]' is invalid for input of size 0
    I couldn't find a solution online. Can anyone help? I followed each step exactly, same resolutions and everything. AMD GPU though.

  • @michaspringphul
    @michaspringphul 9 months ago

    Does that work for all kinds of hand positions? E.g. hands grabbing a handle, hands typing on a keyboard or piano, hands clapping...

  • @alexmehler6765
    @alexmehler6765 6 months ago

    Does it also work on hands which don't wave directly at the camera, or for cartoon models? I don't think so.

  • @paultsoro3104
    @paultsoro3104 8 months ago

    Can this handle an image of a couple holding hands? Thanks. It's impossible in Krita and Firefly; I already tried.

  • @alexanderschlosser7987
    @alexanderschlosser7987 10 months ago

    Thank you so much for another amazing tutorial!
    I’m trying to figure out what the best way is to combine this with the refiner. Would I go through both the base and the refiner for the full image first, and then do base and refiner again for only the hands?
    I tried something like that, but the results are not that great as the hands don’t really match the visual quality of the rest of the picture.

    • @sedetweiler
      @sedetweiler  10 months ago

      I would refine at the very end.

    • @alexanderschlosser7987
      @alexanderschlosser7987 10 months ago

      @@sedetweiler Refine everything together, you mean? How would you do that if you want to do 80% of the processing in the base and 20% in the refiner? Fix the hands even with some noise from the base left?

    • @sedetweiler
      @sedetweiler  10 months ago +2

      Yup, that is what I would do. Since the position of the fingers is probably already determined by that time, additional refinement isn't going to undo that.

    • @alexanderschlosser7987
      @alexanderschlosser7987 10 months ago

      Thank you, I really appreciate your input!

  • @marcihuppi
    @marcihuppi 10 months ago

    I clicked "Update All" in the Manager, and now my Comfy doesn't work anymore. I get this error:
    raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled
    Any ideas how to solve this? Everything worked fine before the update.

  • @XERTIUS
    @XERTIUS 4 months ago

    Just noticed the sampler he used. Isn't that one meant for art/digital art/3D models, and Euler_Ancestral best for realistic images? Or am I missing something here?

  • @Stage_Leap
    @Stage_Leap 10 months ago

    Can you share a downloadable workflow for this

  • @RussellThomason
    @RussellThomason 9 months ago

    This only seems to detect 1 set of hands even when there are multiple people, and it doesn't detect parts of fingers or hands that are occluded. And there are very often noticeable artifacts around the bounding boxes themselves even if the hands are done well. Any ideas how to refine this?

  • @s.r.9423
    @s.r.9423 3 months ago

    Hey Scott :) I can't find this graph in the community area. Did you delete it?

  • @NotThatOlivia
    @NotThatOlivia 10 months ago

    HUGE THANKS !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    • @sedetweiler
      @sedetweiler  10 months ago

      Sure! Happy Friday!

  • @Catapumblamblam
    @Catapumblamblam 10 months ago +1

    My ControlNet model list is empty and I can't find where to download the models.

    • @sedetweiler
      @sedetweiler  10 months ago

      If you go to the git for any node suite by clicking on the name in the manager, it will tell you what additional files or models are needed and where to get them.

    • @Catapumblamblam
      @Catapumblamblam 10 months ago +1

      @@sedetweiler At 4:20, when you are selecting your model in the ControlNet list, your list is full of models; my list is empty!

    • @Catapumblamblam
      @Catapumblamblam 10 months ago

      @@sedetweiler And another question: does it work on text-to-video?

    • @gelisob
      @gelisob 8 months ago

      Same, the "Load ControlNet Model" box list is empty. I did get the mesh nodes when installing the Fannovel16 pack, but that list is empty. Continuing to look for an answer.

  • @keylanoslokj1806
    @keylanoslokj1806 10 months ago

    How do you get this level of control, though, with Colab notebooks and Python code?

  • @Seffyzero
    @Seffyzero 6 months ago +1

    Bold choice, spending 5 minutes setting up nodes you explicitly tell us not to do, only to have those nodes be required steps in the tutorial.

  • @BiancaMatsuo
    @BiancaMatsuo 7 months ago

    Is this possible to do with other WebUIs, like Forge WebUI?

  • @zdvvisual
    @zdvvisual 10 months ago

    Hi, thank you for this idea, but I had a problem. I generated 3 persons, but the refiner only got the first person's left and right hands; the second and third persons' hands were not detected. So I only fixed one person's hands. What is the problem here?

  • @cstar666
    @cstar666 9 months ago

    Is there anything similar in the works for FEET?!

  • @INVICTUSSOLIS
    @INVICTUSSOLIS 9 months ago

    Hi Scott, any new videos? There's some new stuff we need to learn.

  • @RamonGuthrie
    @RamonGuthrie 10 months ago

    Thanks for this video

    • @sedetweiler
      @sedetweiler  10 months ago

      Most welcome

  • @xaiyeon_xiuzhen
    @xaiyeon_xiuzhen 10 months ago

    ty for sharing !

    • @sedetweiler
      @sedetweiler  10 months ago

      My pleasure!

  • @kleber1983
    @kleber1983 10 months ago

    Funny how I have the proper ControlNet installed but I don't have this specific one for hands... What am I doing wrong? Thx.

    • @sedetweiler
      @sedetweiler  10 months ago

      Check that you are up to date and have restarted.

  • @V_2077
    @V_2077 5 months ago

    My hands aren't being detected; it's just a black preview. My image is a person with hands on hips.

  • @SLAMINGKICKS
    @SLAMINGKICKS 10 months ago

    I have two GPUs. How do I make sure ComfyUI is using the more powerful of the two Nvidia cards?

  • @beatemero6718
    @beatemero6718 10 months ago

    The MeshGraphormer puts out only a black image. I have everything installed and updated. Any help?

    • @sedetweiler
      @sedetweiler  10 months ago +1

      Hmm, is it not seeing the hands at all? If they are really messed up, it will not see them. I would just check the mask to see if it found them.

    • @beatemero6718
      @beatemero6718 10 months ago

      @@sedetweiler I tested it again with a simple prompt of a waving woman, using an empty latent image and a resolution of 832x1216 (using a custom SDXL merge), and it works fine. The first time, I tried img2img of a stylized toon character whose output hands already looked quite alright. However, the MeshGraphormer refuses to recognize the hands of said character.

    • @sedetweiler
      @sedetweiler  10 months ago

      It might not be good with cartoons. Not sure, I don't tend to go for that type of artwork personally.

    • @beatemero6718
      @beatemero6718 10 months ago

      @@sedetweiler Yeah, that's what I expected, and it seems to be the case. It doesn't properly recognize cartoony proportions, even though in my opinion cartoony hands come out better in general, because they are bigger and give Stable Diffusion more space to generate them a bit better.

    • @dflfd
      @dflfd 10 months ago

      @@beatemero6718 maybe you could try DWPose?

  • @ryzikx
    @ryzikx 10 months ago

    all models! nice

    • @sedetweiler
      @sedetweiler  10 months ago

      cheers!

  • @salturpasta6204
    @salturpasta6204 10 months ago

    Not trying to sound facetious here, but surely it would be far less of a ballache just to Photoshop the extra fingers out; far quicker 🤷‍♂️

    • @DaemonJax
      @DaemonJax 9 months ago +1

      Yeah, those original image hands were already pretty great -- I'd just fix it in photoshop. I guess this method is fine for people with ZERO artistic ability.

  • @___x__x_r___xa__x_____f______
    @___x__x_r___xa__x_____f______ 10 months ago

    Hi Scott, where is the wf please?

    • @sedetweiler
      @sedetweiler  10 months ago +1

      wf? sorry, not sure I follow.

    • @___x__x_r___xa__x_____f______
      @___x__x_r___xa__x_____f______ 10 months ago

      @@sedetweiler That's ok, I tested your workflow for Graphormer.

  • @velRic
    @velRic 14 days ago

    But how do you set the number of fingers for humanoids or creatures? Anyone know? Your tutorial works great, but only with humans.

  • @lumbagomason
    @lumbagomason 10 months ago

    Thanks man