Hand FIXING Controlnet - MeshGraphormer

  • Published on Jan 1, 2024
  • MeshGraphormer is a hand-fixing preprocessor for ControlNet. I built a ComfyUI workflow for you and will explain step by step how it works.
    👐 Elevate your digital artwork with Graphormer's advanced depth-map generation, providing lifelike and realistic hand anatomy. 🎨 This tool opens up a world of possibilities for correct hands and expressive hand gestures.
    #### Links from the Video ####
    my Workflow: openart.ai/workflows/NkhzwEW8...
    ControlNet Aux: github.com/Fannovel16/comfyui...
    Hand Inpaint Model: huggingface.co/hr16/ControlNe...
    #### Join and Support me ####
    Buy me a Coffee: www.buymeacoffee.com/oliviotu...
    Join my Facebook Group: / theairevolution
    Join my Discord Group: / discord
    AI Newsletter: oliviotutorials.podia.com/new...
    Support me on Patreon: / sarikas
  • Howto & Style

Comments • 230

  • @Douchebagus
    @Douchebagus 4 months ago +3

    Hey, Olivio, been watching your videos for a while now. I just wanted to say what an absolute help your guides are and I am thankful that your channel exists. I tried ComfyUI a few months ago but I gave up because of the large learning curve, but with your help, I'm not only cruising through it but learning faster than I thought possible.

  • @goldholder8131
    @goldholder8131 4 months ago +1

    The way you articulate how the nodes relate to each other is just fantastic. And your workflows are a fantastic place to learn about the flow of processing all these complex things. Messing with variables here and there makes it a fun scientific project. Thanks again!

  • @BrunoBissig
    @BrunoBissig 4 months ago +1

    Hi Olivio, that's simply great, thank you very much for the workflow!

  • @henrischomacker6097
    @henrischomacker6097 5 months ago

    Excellent video! Congratulations.

  • @vincentmilane
    @vincentmilane 4 months ago +7

    ERROR : When loading the graph, the following node types were not found:
    AV_ControlNetPreprocessor
    Nodes that have failed to load will show as red on the graph.
    I tried many things; it always pops up.

    • @caffeinezombies
      @caffeinezombies 4 months ago

      I still have this issue as well, after following many suggestions on installing other items.

    • @user-ln7ti5ki5z
      @user-ln7ti5ki5z 3 months ago

      Same here

    • @user-ln7ti5ki5z
      @user-ln7ti5ki5z 3 months ago +1

      I solved this issue by opening the manager and then clicking "Install Missing Custom Nodes"

    • @jakubjakubjakubjakubjakub
      @jakubjakubjakubjakubjakub 2 months ago

      @@user-ln7ti5ki5z That works! Thank you!

  • @GrocksterRox
    @GrocksterRox 5 months ago

    Very creative as always Olivio!!!

  • @nikgrid
    @nikgrid 3 months ago

    Thanks Olivio..excellent tutorial

  • @daviddiehn5176
    @daviddiehn5176 4 months ago +4

    Hey Olivio, I integrated the mask captioning etc. into my workflow, but now the same error occurs every time. I played around a bit, but I am still clueless.
    Error occurred when executing KSampler:
    mat1 and mat2 shapes cannot be multiplied (308x2048 and 768x320)
    ... ( 100 lines complex code )

  • @knightride9635
    @knightride9635 5 months ago +1

    Thanks ! A lot of work went into this. Happy new year

  • @seraphin01
    @seraphin01 5 months ago +4

    awesome video, it was still a struggle even with controlnet to fix those pesky hands.. gonna give it a try with this setup, you're amazing, happy new year!

  • @76abbath
    @76abbath 5 months ago

    Thanks a lot for the video Olivio!

  • @jenda3322
    @jenda3322 5 months ago

    As always, your videos are fantastic. 👍👏

  • @skycladsquirrel
    @skycladsquirrel 5 months ago +1

    Great job Olivio! Let's give you a five finger hand of applause!

  • @hippotizer
    @hippotizer 4 months ago

    Super valuable video, thanks a lot!

  • @mosske
    @mosske 5 months ago

    Thank you so much Olivio. Love your videos! 😊

  • @diegopons9808
    @diegopons9808 5 months ago +10

    Hey! Available on A1111 as well?

  • @fingerprint8479
    @fingerprint8479 5 months ago +9

    Hi, works with A1111?

  • @amkire65
    @amkire65 5 months ago +3

    Great video. I find that the depth map looks a lot better than the hand in the finished image, I'm not too sure why it changes quite so much. It's cool that we're getting closer, though... what I'm really after is a way to get consistent clothing in multiple images so I don't have a character that changes clothes in every panel of a story.

  • @micbab-vg2mu
    @micbab-vg2mu 5 months ago

    Very useful - thank you.

  • @maxfxgr
    @maxfxgr 5 months ago

    Hello and have an awesome 2024

  • @Ulayo
    @Ulayo 5 months ago +16

    A little late comment, but you don't need to do a vae decode -> encode. There's a node called "Remove latent noise mask" that removes the mask so you can keep working on the same latent. (Every time you go between latent and pixel space you lose a little quality, as the decode/encode process is not lossless).
    Also, you would probably get a little less sausage like hands if you lowered the denoise a bit to somewhere in the 0.7-0.9 area.

    • @zoybean
      @zoybean 4 months ago

      But then it doesn't show an image output so how would I do that for the midas preprocessor step?

    • @Ulayo
      @Ulayo 4 months ago +2

      @@zoybean You still decode the latent to get an image for the midas step. Just connect that same latent to a remove noise mask and pass that to the upscale latent node.

    • @beatemero6718
      @beatemero6718 4 months ago

      I don't quite understand. You need the decode to pass the image to the MeshGraphormer. The remove noise mask node has only a latent input and output, so how would you not need the decode?

    • @Ulayo
      @Ulayo 4 months ago +2

      @@beatemero6718 I may have worded my reply a bit wrong. You still need to decode the latent to get an image that you pass to the preprocessor. But you shouldn't encode that image again. Just add a remove latent noise mask to the same latent and send it to the sampler.

    • @beatemero6718
      @beatemero6718 4 months ago

      @@Ulayo I got you.
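
The thread above is about staying in latent space instead of round-tripping through the VAE between samplers. Below is a minimal sketch of why that matters, using the diffusers library and the public stabilityai/sd-vae-ft-mse VAE as stand-ins; this only illustrates VAE lossiness in general, it is not part of Olivio's workflow, and the random tensor is a crude stand-in for a real decoded image.

```python
import torch
from diffusers import AutoencoderKL  # assumes: pip install torch diffusers

# Any SD-1.5-compatible VAE will do; this public one is just an example.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval()

with torch.no_grad():
    # Stand-in for an image in [-1, 1], shape (batch, channels, height, width).
    image = torch.rand(1, 3, 512, 512) * 2 - 1

    latents = vae.encode(image).latent_dist.mode()  # pixel space -> latent space
    round_trip = vae.decode(latents).sample         # latent space -> pixel space

    # A non-zero error shows that one trip through the VAE is already lossy,
    # which is why reusing the latent (and only swapping the noise mask)
    # preserves more detail than decoding and re-encoding between samplers.
    err = (round_trip - image).abs().mean().item()
    print(f"Mean absolute error after one VAE round trip: {err:.4f}")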

  • @lauracamellini7999
    @lauracamellini7999 5 months ago

    Thanks so much olivio!

  • @RamonGuthrie
    @RamonGuthrie 5 months ago +5

    This might be your most liked video ever ...Hand FIXING the Holygrail of AI

    • @rileyxxxx
      @rileyxxxx 5 months ago

      xD

  • @hwj8640
    @hwj8640 5 months ago

    Thanks for sharing!

  • @ysy69
    @ysy69 5 months ago

    happy new year olivio!

  • @HolidayAtHome
    @HolidayAtHome 5 months ago +1

    That's great! Would love to see some examples of more complicated hand positions or hands that are partly covered by some objects. Does it still work then or is it unusable in those scenarios ?

  • @user-fu5sz4su8u
    @user-fu5sz4su8u 5 months ago +5

    Can I use this in A1111????

  • @hatuey6326
    @hatuey6326 5 months ago

    Great tutorial as always! I would like to see how it works on img2img and with SDXL!

  • @michail_777
    @michail_777 5 months ago

    That's great! Now let's do the tests:)))

  • @BenjaminKellner
    @BenjaminKellner 5 months ago +12

    Instead of VAE decode and encode before your latent upscale, you could use a 'get latent size' node, create an empty mask injecting width/height as input, and apply a blank mask as the new latent mask. Especially with larger images it will save you time versus going through the VAE pipeline, but also, since the VAE encoding/decoding is a lossy process, you actually lose quality between samples (not that an upscaled latent looks any better unless done iteratively) -- I prefer to upscale in pixel space, then denoise starting at step 42, ending at step 52, then another sample after that from step 52 to carry me to 64 steps. I find three samples before post is my optimal workflow.

    • @user-wi7vz2io5n
      @user-wi7vz2io5n 5 months ago +2

      Excellent. Where can I find your optimal workflow to learn from you? Thank you

    • @tetsuooshima832
      @tetsuooshima832 2 months ago

      @@user-wi7vz2io5n hahahaha

  • @sb6934
    @sb6934 5 months ago

    Thanks!

  • @Gabriecielo
    @Gabriecielo 5 months ago +1

    Thanks for the tutorial; the result is amazing and saves a lot of Photoshop time. I found there are several limitations too. It focuses on fixing fingers, but if there are two right hands on the same person, this model doesn't seem to fix it; maybe I didn't find the right way to tune it? And it's SD 1.5 only and can't work with SDXL checkpoints for now; I hope it gets updated later.

  • @jcvijr
    @jcvijr 5 months ago +1

    Thank you! This model could be included in the ADetailer node to simplify the process.

  • @BackyardTattoo
    @BackyardTattoo 5 months ago +5

    Hi Olivio, thanks for the video.
    How can I apply the workflow to an imported image? Is it possible?

  • @weebtraveller
    @weebtraveller 5 months ago

    thank you very much, great as always. Can you do Ultimate SD Upscale instead?

  • @Zbig-xw6yp
    @Zbig-xw6yp 5 months ago

    Great video. Please note that the preprocessor requires the "Segment Anything" node for some reason and cannot be loaded without it! Thank you for sharing!

  • @user-db1rv4ou4l
    @user-db1rv4ou4l 5 months ago +3

    would be nice if you had an sdxl version

  • @markdkberry
    @markdkberry 1 month ago +1

    I get some weird module = 1 error and it won't go past the last KSampler, maybe because "ControlNet Auxiliary Preprocessors" has this message in the manager, so this doesn't work for me: "NOTE: Please refrain from using the controlnet preprocessor alongside this installation, as it may lead to conflicts and prevent proper recognition."

  • @ImAlecPonce
    @ImAlecPonce 5 months ago

    Thanks :) I'm going to try sticking an img to img to it right away XD

  • @97BuckeyeGuy
    @97BuckeyeGuy 5 months ago +3

    I wish you would do more work with SDXL models. I want to see some of the workarounds that may be out there for the lack of a Tiled ControlNet. And I'd like to see more about Kohya Shrink with SDXL.

    • @OlivioSarikas
      @OlivioSarikas 5 months ago +2

      Yes, I really need to do more SDXL. But personally I never use it for my AI images, because it takes much longer and I don't need the added benefits.

    • @EH21UTB
      @EH21UTB 5 months ago +2

      @@OlivioSarikas Also interested in SDXL. Isn't there a way to use this new hands tool to generate the depth mask and then apply with SDXL models?

    • @Steamrick
      @Steamrick 4 months ago

      @@EH21UTB Of course. There are SDXL depth ControlNets available, though they're not specifically trained for hands. You'd have to experiment with which of the available ones works best.

  • @ooiirraa
    @ooiirraa 5 months ago +4

    Thank you for the new ideas! I think it can be improved a little bit. Every encode goes with a loss of quality, so it might be a better decision to first create the full rectangular mask with the dimensions of the image and then apply the new mask to the latent without reencoding. ❤ thank you for your work!

    • @cchance
      @cchance 5 months ago +1

      Ya was gonna say don’t decode and recode just overwrite the mask

    • @Foolsjoker
      @Foolsjoker 5 months ago +2

      @@cchance How would you just overwrite the mask without decoding to 'flatten' the image?

    • @Madwand99
      @Madwand99 5 months ago +1

      @@cchance Do you have another workflow to show what you mean by this?

  • @UltraStyle-AI
    @UltraStyle-AI 5 months ago +3

    Can't find any info about it yet. Need to install on A1111.

  • @bluemurloc5896
    @bluemurloc5896 5 months ago +2

    great video, would you please consider making a tutorial for automatic 1111?

    • @BabylonBaller
      @BabylonBaller 4 months ago +1

      Yea, feels like all he posts about is Comfy and forgetting about the 90% of the industry that uses Automatic1111.

  • @abellos
    @abellos 5 months ago +6

    Fantastic, can it also be used in Automatic1111?

    • @mirek190
      @mirek190 5 months ago +1

      lol

    • @sharezhade
      @sharezhade 5 months ago +8

      Need a video about that. Comfy-ui seems so complicated

    • @AirwolfPL
      @AirwolfPL 4 months ago

      @@sharezhade it's not complicated and offers great control of the process but it's horribly time consuming. A1111 offers much more streamlined experience for me.

  • @kleber1983
    @kleber1983 4 months ago

    Is the ControlNet really necessary? I've achieved the same result by passing the MeshGraphormer mask through a VAE encode for inpainting and it worked. I think it's simpler, but I wonder if it compromises the quality... thx.

  • @ryutaro765
    @ryutaro765 3 months ago +1

    Can we also use this refined method for img2img?

  • @TheHmmka
    @TheHmmka 3 months ago +1

    How do I fix the following error?
    When loading the graph, the following node types were not found:
    AV_ControlNetPreprocessor
    Nodes that have failed to load will show as red on the graph.

    • @user-ln7ti5ki5z
      @user-ln7ti5ki5z 3 months ago

      I solved this issue by opening the manager and then clicking "Install Missing Custom Nodes"

  • @Not4Talent_AI
    @Not4Talent_AI 5 months ago +2

    Pretty cool! Does it work well with hands in more complex positions? Like someone flicking a marble (random example).

    • @Rasukix
      @Rasukix 5 months ago +1

      hello there

    • @Steamrick
      @Steamrick 5 months ago +5

      Try it out and let us know

    • @Not4Talent_AI
      @Not4Talent_AI 5 months ago

      @@Rasukix sup!1 hahhaa

    • @Not4Talent_AI
      @Not4Talent_AI 5 months ago

      @@Steamrick don't have Comfy installed atm

  • @4thObserver
    @4thObserver 5 months ago +2

    I really hope they streamline this process in future iterations. MeshGraphormer seems very promising but I lost track of what each step and process does 6 minutes into the video.

    • @meadow-maker
      @meadow-maker 5 months ago +1

      Yeah I couldn't even load the Mesh Graphormer node at first, it took me several breaks, coffees and redo until I found it. Really shoddy training video.

  • @ImmacHn
    @ImmacHn 5 months ago +1

    1:30 You can update the custom nodes instead of uninstalling and then reinstalling: in the Manager, press "Fetch Updates"; once the updates are fetched, Comfy will prompt you to open "Install Custom Nodes", at which point the custom nodes that have updates will show an "Update" button. After that, restart Comfy and refresh the page.

    • @OlivioSarikas
      @OlivioSarikas 5 months ago +1

      I know. But when I updated it, it didn't give me the new preprocessor

    • @ImmacHn
      @ImmacHn 5 months ago +2

      @@OlivioSarikas I see, did you refresh the page after? The nodes are basically client sided so you would need to reload after the reset to see the new node

    • @ImmacHn
      @ImmacHn 5 months ago +1

      @@OlivioSarikas Also thanks for the videos, they're very helpful!

    • @Kryptonic83
      @Kryptonic83 5 months ago +4

      yeah, i hit update all in comfyui manager then fully restarted comfyui and refreshed the page, worked for me without reinstalling the extension.

  • @hmmrm
    @hmmrm 5 months ago

    Hello, I have tried to reach you on Discord but I couldn't. I wanted to ask you a very important question: once we upload our workflows to OpenArt, we can't delete any of the workflows? Why?

  • @jbnrusnya_should_be_punished
    @jbnrusnya_should_be_punished 12 days ago

    I got a strange error: SyntaxError: Unexpected non-whitespace character after JSON at position 4 (row 1 column 5). Even "Install Missing Custom Nodes" does not help.

  • @listahul2944
    @listahul2944 5 months ago

    Great! Thanks for the video. How about an img2img hand-fix workflow?

    • @OlivioSarikas
      @OlivioSarikas 5 months ago

      It's inpainting, so that should work too

    • @TheDocPixel
      @TheDocPixel 5 months ago +1

      Technically... this is img2img. Just delete the front parts that generate the picture, and start by adding your own picture with a Load Image node.

  • @Rasukix
    @Rasukix 5 months ago +7

    I presume this is usable with a1111 also?

    • @GS195
      @GS195 5 months ago +2

      Oh I hope so

    • @ImmacHn
      @ImmacHn 5 months ago +2

      Should really try going the Comfy route, it might seem overwhelming at first, but it's amazing once you get the hang of it.

    • @Rasukix
      @Rasukix 5 months ago

      @@ImmacHn I just find nodes hard to handle; my brain just doesn't work well with them.

  • @PeoresnadaStudio
    @PeoresnadaStudio 5 months ago

    i would like to see more result examples :)

    • @OlivioSarikas
      @OlivioSarikas 5 months ago +1

      You can create as many as you want with my workflow. But I know what you mean 🙂

    • @PeoresnadaStudio
      @PeoresnadaStudio 5 months ago

      @@OlivioSarikas I mean, it's nice to see more samples in general... thanks for your videos, they are great!

  • @megadarthvader
    @megadarthvader 5 months ago +2

    Isn't there a simplified version for web ui? 😅 With that concept map style system everything looks so complicated 🥶

  • @alucard604
    @alucard604 5 months ago +1

    Any idea why my "comfyui-art-venture" custom nodes have an "import failed" issue? It is required by this workflow for the "ControlNet Preprocessor". I already made sure that all conflicting custom nodes are uninstalled.

    • @2PeteShakur
      @2PeteShakur 5 months ago

      Same issue. Have you updated ComfyUI?

    • @caffeinezombies
      @caffeinezombies 4 months ago

      Same issue

  • @randomVimes
    @randomVimes 5 months ago

    One suggestion for vids like this: a section at the end which shows 3 example prompts and results. Prompt can be on screen, dont have to read it out

  • @josesimoes1516
    @josesimoes1516 5 months ago +1

    If anyone else has an error that the 'mediapipe' module can't be found, and you can't install the package due to an OSError or something like that: just uninstall the auxiliary preprocessor nodes, reboot Comfy, install again, reboot again, and it works. Everything was fully updated when I was getting that error, so reinstalling is probably the best choice just to avoid annoyances.

  • @duck-tube6786
    @duck-tube6786 5 months ago +50

    Olivio, by all means continue with ComfyUI vids but please also include A1111 as well.

    • @happyme7055
      @happyme7055 5 months ago +11

      Yes, please Olivio! For me as a hobby AI creator, A1111 is the better solution because it's not nearly as complicated to install/operate...

    • @cchance
      @cchance 5 months ago +4

      A1111 plugins come slower these days, and stuff like this in A1111 just isn't as easy; he's doing 3 KSamplers, masking and other stuff in a specific order. That's just not how A1111 works, at least not easily.

    • @sierradesigns2012
      @sierradesigns2012 5 months ago +1

      Yes please!

    • @joeterzio7175
      @joeterzio7175 5 months ago +12

      I see ComfyUI and I stop watching. It's obsolete already and that workflow looks like a complex wiring diagram. The future of AI image generation is going to be text based, not that mess of spaghetti string.

    • @fritt_wastaken
      @fritt_wastaken 5 months ago +8

      @@joeterzio7175 text based is the past of AI image generation. And it won't come back until something like chatgpt can understand you perfectly and use that "spaghetti string" for you. And even then you probably would have to intervene if you're not just goofing around and actually creating something.
      There is absolutely no way to describe everything required for an image using just text

  • @HeinleinShinobu
    @HeinleinShinobu 5 months ago +1

    Cannot install the ControlNet preprocessor; it has this error. Conflicted Nodes:
    ColorCorrect [ComfyUI-post-processing-nodes], ColorBlend [stability-ComfyUI-nodes], SDXLPromptStyler [ComfyUI-Eagle-PNGInfo], SDXLPromptStyler [sdxl_prompt_styler]

  • @Okratron-rr8we
    @Okratron-rr8we 5 months ago

    i tried replacing the first ksampler with a load image node so that i could process an already generated image through, but it just skipped the mesh graphormer node entirely. any tips? i also plugged the load image into a vae encoder for the second ksampler.

    • @Okratron-rr8we
      @Okratron-rr8we 5 months ago

      nvm, the mesh graphormer simply isnt able to detect the hands in the image i'm using. maybe soon there will be a way to increase its detectability. other than that, this works great!

    • @listahul2944
      @listahul2944 5 months ago

      @@Okratron-rr8we I'm just starting with Comfy, so forgive me if there is some mistake.
      What I did: created a "Load Image" node and connected it to the "MeshGraphormer Hand Refiner"; created a "VAE Encode" node and connected the same "Load Image" to it; that VAE Encode I connected to the "Set Latent Noise Mask". Also, using it like this, sometimes it isn't able to detect the hands in the image I'm using.

    • @Okratron-rr8we
      @Okratron-rr8we 4 months ago

      @@listahul2944 yep, thats exactly what i did also. im sure there is a way to identify the hands for the ai but im new to this also. thanks for trying though

  • @VladimirBelous
    @VladimirBelous 4 months ago

    I made a workflow for improving the face using a depth map. I would like to link into this process the improvement of hands using a depth map, as well as an upscale-with-detail step that doesn't lose quality. For me it turns out either blurry or pixelated around the edges.

  • @SetMeFree
    @SetMeFree 1 month ago

    when i do img2img it changes my original image into a cartoon but fixes the hands. Any advice?

  • @fabiotgarcia2
    @fabiotgarcia2 3 months ago

    Hi Olivio!
    How can we apply this workflow to an imported image? Is it possible?

  • @AnimeDiff_
    @AnimeDiff_ 4 months ago

    segs preprocessor?

  • @KINGLIFERISM
    @KINGLIFERISM 5 months ago +2

    In Darth Vader's voice, " the circle is com-plete." I am now wondering if SEGS could be used instead of a huge box. It can mess up a face if the hand is close to it. Any ideas guys?

  • @graphilia7
    @graphilia7 5 months ago +1

    Thanks!
    I have a problem when I launch the Workflow, this warning appears: "the following node types were not found: AV_ControlNetPreprocessor"
    I downloaded and placed the "ControlNet-HandRefiner-pruned" file in this folder: ComfyUI_windows_portable\ComfyUI\models\controlnet.
    Can you please tell me how to fix this?

    • @sirdrak
      @sirdrak 5 months ago +2

      Same here... I tried uninstalling and reinstalling the custom nodes as said in the video, but the error persists. Edit: Solved by installing the Art Venture custom nodes, but now I have the 'mediapipe' error with the MeshGraphormer-DepthMapPreprocessor node...

    • @birdfingers354
      @birdfingers354 4 months ago

      Me three

    • @caffeinezombies
      @caffeinezombies 4 months ago

      @@sirdrak I looked for the Art Venture custom nodes and couldn't find anything.

    • @notanemoprog
      @notanemoprog 3 months ago

      I replaced the "ControlNet Preprocessor" node used in the video's workflow (from that "venture" package I don't have) with "AIO Aux Preprocessor", selecting "MiDaS DepthMap", and got at least the first image produced (bad hands) before further problems happened.

  • @TheColonelJJ
    @TheColonelJJ 2 months ago

    Can we add this to Forge?

  • @1ststepmedia105
    @1ststepmedia105 5 months ago

    I keep getting an error message; the workflow stops at the MeshGraphormer-DepthMapPreprocessor window. I followed the directions you gave and have downloaded the hand inpaint model and placed it in the folder, but no luck.

    • @jamiesonsidoti
      @jamiesonsidoti 5 months ago

      Same... hits the MeshGraphormer node and coughs the error: A Message class can only inherit from Message
      Getting the same error when attempting to use the Load InsightFace node for ComfyUI_IPAdapter_Plus. Tried on a separate new install of Comfy and the error persists.

  • @RhapsHayden
    @RhapsHayden 15 days ago

    Have you managed to get consistent hand animations yet?

  • @androidgamerxc
    @androidgamerxc 5 months ago +3

    I'm Automatic1111 squad, please tell us how to add it in that.

  • @A42yearoldARAB
    @A42yearoldARAB 3 months ago

    Is there an automatic 1111 version of this?

  • @BVLVI
    @BVLVI 5 months ago

    What keeps me from using ComfyUI is the models folder. I want to keep it in A1111, but I can't seem to figure out how to make ComfyUI point to that folder.

    • @OlivioSarikas
      @OlivioSarikas 5 months ago +1

      There is a yaml file in the Comfy folder called extra_model_paths. Most likely your version ends in ".example"; remove that to make it a yaml file and add the A1111 folder.

    • @SaschaFuchs
      @SaschaFuchs 4 months ago +2

      What Olivio has written, or symlinks. That's how I did it: because I put all the LoRAs and checkpoints on an external SSD, they are connected with symlinks. I do the same with the output folders; they run together into one folder using symlinks.
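
Following the extra_model_paths tip in the two replies above: a rough sketch of what the relevant section of extra_model_paths.yaml can look like once the ".example" suffix is removed. The keys mirror the .example template that ships with ComfyUI, but double-check them against your own copy; the base_path below is only a placeholder for your A1111 install.

```yaml
# extra_model_paths.yaml (sits next to ComfyUI; rename the shipped
# extra_model_paths.yaml.example and point base_path at your A1111 folder)
a111:
    base_path: D:/stable-diffusion-webui/   # placeholder path, edit this

    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    embeddings: embeddings
    controlnet: models/ControlNet
    upscale_models: models/ESRGAN
```

ComfyUI reads this file on startup, so restart it after editing; symlinks (as mentioned above) are an alternative that works at the filesystem level instead.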

  • @substandard649
    @substandard649 5 months ago +2

    Interesting.... does it work with SDXL too?

    • @rodrimora
      @rodrimora 5 months ago +1

      would like to know too

    • @Steamrick
      @Steamrick 5 months ago +1

      The controlnet is clearly made for SD1.5. That said, there's no reason you could not combine the depth map output with a SDXL depth controlnet, though it may not work quite as well as a net specifically trained for hands.

    • @TheP3NGU1N
      @TheP3NGU1N 5 months ago

      Sd1.5 always comes first.. SDXL will probably be next as they usually require a little extra to get worked out.

    • @substandard649
      @substandard649 5 months ago

      I thought sd15 was officially deprecated, if so then you would expect sdxl to be the first target for new releases. That being said i get way better results from the older model, XL is so inflexible by comparison...rant over 😀

    • @TheP3NGU1N
      @TheP3NGU1N 5 months ago

      @@substandard649 Depends on what you are going for. In the realm of realism, SD1.5 is still king to most people, though XL is quickly catching up.
      Programming-wise, SD1.5 is easier, and most of the time, if you get it to work for SD1.5, getting it to work for XL is going to be much easier; the reverse isn't quite the same.

  • @D3coify
    @D3coify 3 months ago

    I'm trying to do this with "load Image" node

  • @V_2077
    @V_2077 1 month ago +1

    Anybody know an sdxl controlnet refiner for this?

  • @haoshiangyu6906
    @haoshiangyu6906 5 months ago

    Add a Krita + Comfy workflow, please! I see a lot of videos that combine the two and would like to see how you use it.

  • @News_n_Dine
    @News_n_Dine 5 months ago

    Unfortunately I don't meet the device requirements to set up ComfyUI. Do you have any advice for me, please?

    • @News_n_Dine
      @News_n_Dine 5 months ago

      Btw, I already tried google colab, didn't work

  • @vbtaro-englishchannel
    @vbtaro-englishchannel 2 months ago

    It’s awesome but I can’t use meshgraphormer node. I don’t know why. I guess it’s because I’m using Mac.

  • @hurricanepirates8602
    @hurricanepirates8602 5 months ago +1

    Why is AV_ControlNetPreprocessor node red? Egadz!

  • @omegablast2002
    @omegablast2002 5 months ago

    only for comfy?

  • @MiraPloy
    @MiraPloy 4 months ago

    Couldn't dwpose or openpose do the same thing?

  • @tutmstudio
    @tutmstudio 2 months ago

    The hand is corrected to some extent, but the face is different in the end result. Can't you keep the same face?

  • @adamcarskaddan
    @adamcarskaddan 4 months ago

    I don't have the ControlNet preprocessor. How do I fix this?

    • @user-ln7ti5ki5z
      @user-ln7ti5ki5z 3 months ago

      Try opening the manager and then clicking "Install Missing Custom Nodes" and reboot

  • @sergetheijspartner2005
    @sergetheijspartner2005 4 days ago

    Maybe make a "perfect human" workflow. I have seen separate workflows for face detailing, skin, hands, eyes, feet... maybe I just want to click Queue Prompt once and have my humanoid figure be perfect in the end, without building a workflow for every part of the human body.

  • @NamikMamedov
    @NamikMamedov 5 months ago

    How can we fix hands in automatic 1111?

  • @wykydytron
    @wykydytron 5 months ago +3

    A1111 all the way, noodles are for eating not computers.

  • @truth_and_raids3404
    @truth_and_raids3404 4 months ago

    I can't get this to work; every time I get an error:
    Error occurred when executing MeshGraphormer-DepthMapPreprocessor:
    [Errno 2] No such file or directory: 'C:\\Users\\AShea\\Downloads\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\hr16/ControlNet-HandRefiner-pruned\\cache\\models--hr16--ControlNet-HandRefiner-pruned\\blobs\\41ed675bcd1f4f4b62a49bad64901f08f8b67ed744b715da87738f926dae685c.incomplete'

  • @ttul
    @ttul 5 months ago

    Hmmm. The mask still being in the latent batch output is something that should be fixed.

  • @andrewq7125
    @andrewq7125 5 months ago

    Wait for SDXL

  • @vincentmilane
    @vincentmilane 4 months ago

    ERROR : (IMPORT FAILED) comfyui-art-venture
    How to fix ?

    • @notanemoprog
      @notanemoprog 3 months ago

      If you don't have that "venture" package I guess it is possible to replace the workflow "ControlNet Preprocessor" with "AIO Aux Preprocessor" selecting "MiDas DepthMap"

  • @wzs920
    @wzs920 5 หลายเดือนก่อน +3

    does it work for a1111?

    • @OlivioSarikas
      @OlivioSarikas 5 months ago

      I will check. But new tech almost always comes to comfyui first

  • @Ruslan4564
    @Ruslan4564 5 months ago

    Also, you can use a simple MiDaS depth map instead of ComfyUI's ControlNet Auxiliary Preprocessors.

    • @user-ln7ti5ki5z
      @user-ln7ti5ki5z 3 months ago

      Maybe try opening the manager and then clicking "Install Missing Custom Nodes" and reboot

  • @Konrad162
    @Konrad162 1 month ago

    isn't an open pose better?

  • @DashtonPeccia
    @DashtonPeccia 5 months ago +4

    I'm sure this is a novice mistake, but I am getting AV_ControlNetPreprocessor node type missing even after completely uninstalling and re-installing the Controlnet Aux Preprocessor. Anyone else getting this?

    • @kasoleg
      @kasoleg 5 months ago

      I have the same case, help

    • @KonoShunkan
      @KonoShunkan 5 months ago +1

      That is a different set of custom nodes to the aux controlnet nodes. It's called comfyui-art-venture (AV = Art Venture) and can be installed via Comfyui Manager. You may also need control_depth-fp16 safetensors model from Hugging Face.

    • @2PeteShakur
      @2PeteShakur 5 months ago

      @@KonoShunkan Getting conflicts with comfyui-art-venture; disabled the conflicted nodes, still an issue...

    • @Madwand99
      @Madwand99 5 months ago

      I'm getting this error too, I haven't figured it out yet.

    • @notanemoprog
      @notanemoprog 3 months ago

      Because the one featured in the video and workflow is _not_ in "comfyui_controlnet_aux-main", which most people have, but in another "venture" package. If I understood the point of that node, the same result can be produced by replacing the "ControlNet Preprocessor" used in the video (from that "venture" package I don't have) with "AIO Aux Preprocessor", selecting "MiDaS DepthMap"; I got at least the first image produced (bad hands) before further problems happened.

  • @cokuzaklar
    @cokuzaklar 1 month ago

    thanks for the video, but i am sure there are faster easier ways to tackle this issue

    • @OlivioSarikas
      @OlivioSarikas 1 month ago

      Let me know if you find any. That said, you can also use negative embeddings, but they don't address specific hands; they instead aim to create better hands in the first place. But they might also alter the overall look of your image.

  • @kanall103
    @kanall103 4 months ago +1

    I stopped watching at 0:45 lol

  • @toonleap
    @toonleap 5 months ago +4

    No love for AUTOMATIC1111?

  • @Ultimum
    @Ultimum 5 months ago

    Is there something similar for Stable diffusion?

    • @beatemero6718
      @beatemero6718 5 months ago +2

      What do you mean? Bro, this IS stable diffusion.

    • @Ultimum
      @Ultimum 5 months ago

      @@beatemero6718 Nope thats ComfyUI

    • @sharezhade
      @sharezhade 5 months ago

      @@beatemero6718 I think he means Automatic1111, because ComfyUI is so complicated for some users.

    • @notanemoprog
      @notanemoprog 3 months ago

      @@beatemero6718 Probably means one of the text user interfaces like A1111 or similar.