🎨✨ Mastering Character Poses with ControlNet and Layers in ComfyUI!

  • Published on Sep 26, 2024
  • 🔍✨ Transform your AI art with precise character posing using Layers & ControlNet in ComfyUI! Whether you're an experienced artist or just starting out, this video will guide you through the techniques to pose your characters exactly as you envision. Discover how to use these powerful tools to customize your art to perfection. Watch now and take your creative skills to the next level! 🚀🖌️
    * Support Quality Content: www.paypal.com...
    * 1 on 1 Personalized AI Training / Support Session (Web Launch 15% Off!): www.grockster....
    * Top 10 SDXL Model Leaderboard - docs.google.co...
    -- Chat, Share, Have Fun on Discord - / discord --
    RESOURCES
    * Workflow and Install Instructions - civitai.com/mo...
    * Quick Posing
    - Mixamo - www.mixamo.com/
    - PoseManiacs - www.posemaniac...
    - PoseMyArt - www.posemy.art/
    * MistoLine line ControlNet (and TEED line preprocessor)
    - MistoLine: huggingface.co...
    - TEED: ControlNet Aux
    KEY STABLE DIFFUSION COMFYUI TOPICS DISCUSSED
    * Updated Loader Workflow with Layers
    * ControlNet Basics
    * Free Posing Websites to influence your characters
    * ControlNet Depth and Depth Anything
    * ControlNet LineArt using TEED
    #AIArt #ComfyUI #Layers #ControlNet #CharacterPosing #DigitalArt #ArtTutorial #AIArtist #CreativeTech #ArtCommunity #TechArt #AIInnovation #DigitalCreativity #ArtisticAI #AIArtistry #ArtProcess #LearnAIArt #PoseYourCharacters #TailoredArt #VisualArt #ArtisticExpression #CreativeAI #ArtTech #InnovationInArt

Comments • 35

  • @ArrowKnow · 4 months ago +2

    Whenever I see a new video from you I know I'm about to get schooled! Always an improvement to my workflows and my ability to use ComfyUI! Thanks

    • @GrocksterRox · 4 months ago +1

      This comment made my day, thank you so much and enjoy!

  • @nowhereman22 · months ago +1

    That's. Absolutely. Amazing. Please, just keep going. And may all the gods bless you for your awesome work.

    • @GrocksterRox · months ago

      Love this feedback, thanks so much!

  • @tonon_AI · 4 months ago +1

    Love the bird sounds in the background!
    Awesome as always

    • @GrocksterRox · 4 months ago

      Thank you very much! I wish I could claim it was AI, but it's just noisy birds by my window 😁

  • @onewiththefreaks3664 · 4 months ago +1

    Thank you, I've been eagerly waiting for a new video!

    • @GrocksterRox · 4 months ago

      I'm so glad, hope it's helpful. I'm here to help in any way possible :)

  • @prestonrussell1452 · 3 months ago +1

    What magic is this?? Are you really generating everything that fast, or is the video edited? What kind of computer do you have? Also, amazing tutorial, best I've seen; it helped me understand so much that other videos sort of skipped over, expecting me to just understand.

    • @GrocksterRox · 3 months ago

      Thank you so much! Most of the time I have my workflow ready to go (to ensure that I have the best result to help everyone out), though I do other elements of the videos live during recording. As for the computer, it's a middle-of-the-road desktop (originally with a 12GB VRAM video card, though now it's 24GB). I really appreciate this great feedback, and thanks so much for sharing the videos and the channel with others! :)

  • @captaingabi · 14 days ago +1

    But how do you make sure you are posing the SAME character?

    • @GrocksterRox · 14 days ago

      In your prompting you can be very specific so the model or LoRA pulls out the same subject imagery, or you can use IP Adapter on top of specific areas with pre-rendered images to ensure the key subject qualities come through (or a mix of both techniques).
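      (For readers who want to see the IP Adapter idea outside ComfyUI, here is a minimal sketch using the diffusers library. The model IDs, adapter weight name, scale value, and reference image path are illustrative assumptions, not taken from the video.)

```python
# Sketch: keep the same character across poses by feeding a pre-rendered
# reference image through IP-Adapter while the text prompt changes the pose.
# Model IDs and file names below are illustrative examples.
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# IP-Adapter injects image features from the reference alongside the prompt.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin"
)
pipe.set_ip_adapter_scale(0.6)  # higher = stays closer to the reference character

reference = load_image("character_reference.png")  # your pre-rendered subject

image = pipe(
    prompt="the same character, crouching behind a mossy rock, forest at dusk",
    ip_adapter_image=reference,
    num_inference_steps=30,
).images[0]
image.save("posed_character.png")
```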

  • @aliyilmaz852 · 4 months ago +1

    Thanks a lot, you are not just teaching the technical side, but also how to use the technology to create artistic stuff. I love it!
    Do you have any plans to show us how to play with the colors of a generation? Or color in general?

    • @GrocksterRox · 4 months ago +1

      My pleasure! When you say colors of generation, can you give an example of what you're looking for?

    • @aliyilmaz852 · 4 months ago +1

      @GrocksterRox I am just a noob, I just wanted to adjust the colors of the images (pastel, vibrant, saturated...). I like your taste and it would be good to learn from you at least a little.

    • @GrocksterRox · 4 months ago +1

      Got it, super easy solution for you - just download the Allor custom node pack and connect your image to the ImageEffectsAdjustment node. Enjoy!
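      (For anyone doing the same kind of tweak outside ComfyUI, here is a rough Pillow equivalent of a brightness/contrast/saturation adjustment; the factors are only examples, not the Allor node's defaults.)

```python
# Rough post-generation color adjustment (brightness / contrast / saturation)
# using Pillow; factors above 1 strengthen the effect, below 1 weaken it.
from PIL import Image, ImageEnhance

img = Image.open("generation.png")

img = ImageEnhance.Brightness(img).enhance(1.05)  # slightly brighter
img = ImageEnhance.Contrast(img).enhance(1.10)    # a bit more punch
img = ImageEnhance.Color(img).enhance(0.70)       # <1 pulls toward pastel, >1 toward vibrant/saturated

img.save("generation_adjusted.png")
```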

    • @aliyilmaz852 · 4 months ago +1

      @GrocksterRox thanks a lot!

  • @spacekitt.n · 4 months ago +1

    Daz is better for this, and they give you a default figure that actually has the proportions of a real human.

    • @GrocksterRox · 4 months ago

      This is interesting, thanks. Is it free? It seems to be additional software you have to download/install? I went for the quickest/easiest win for this video.

    • @Axiassart · 4 months ago +1

      @GrocksterRox Daz is free

    • @GrocksterRox · 4 months ago

      I'll check it out, thanks!

  • @SejalDatta-l9u · 2 months ago +1

    Great video.
    Could you please clarify a few points? (I'm using your template to learn, BTW - thanks again.)
    With reference to the first 5 minutes of the video:
    1. Every time I SAM-detect a mask:
    - it creates a border
    - after auto-detecting, the mask bleeds around the edge of the item that I want to select for the foreground
    - after I mask the main object, it projects the inverse (e.g. the background) of what I selected to the previews (bookmark 2)
    2. Every time I generate the previews (bookmark 2), it de-selects the masks that I made during layer generation (e.g. fairy, monster) in the previous generation. In the previews, this effectively means that the full generated pic without masking is overlaid onto the background.
    How can I overcome these problems?
    Am I missing a step?

    • @GrocksterRox · 2 months ago +1

      Thanks so much for your comments, here are a few pointers:
      * Agreed on the border - I'll usually go in with the mask editor and just paint/mask out the edges (it is an extra step, but thankfully only a few seconds)
      * The auto-detection bleeding - you can address that by increasing the precision on the slider (move it to the right). That should hopefully reduce the amount it incorrectly guesses to include
      * For any of your layered elements, I would ensure that your seed is fixed. My guess would be that your sampler is generating a new version (which would destroy the previous mask), and that of course then destroys the layering you've set up.
      Hope some of these are helpful and good luck! Happy to chat about them more or even work through them in a training session if needed/desired.

    • @SejalDatta-l9u · 2 months ago +1

      @GrocksterRox spectacular!
      Thanks for the quick reply. I'll try this today.
      My slider doesn't seem to have any effect on the bleeding though. I thought it would. Any other ideas?
      Keep up the great work

    • @GrocksterRox · 2 months ago

      @SejalDatta-l9u If the images are very close (in terms of contrast, where the color changes are minimal), you may want to employ a different masking technique instead of SAM (e.g. Dino Grounding - th-cam.com/video/TFfKE3Jyy-w/w-d-xo.html)
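      (For context, the grounded-masking idea works roughly like this outside ComfyUI: a text-grounded detector finds the subject, and SAM turns the detection into a mask. The sketch below uses the Hugging Face transformers ports of Grounding DINO and SAM; the model IDs, text query, and thresholds are illustrative assumptions.)

```python
# Rough sketch of text-grounded masking (the "Dino Grounding" idea):
# Grounding DINO finds boxes for a text prompt, SAM turns the boxes into masks.
import torch
from PIL import Image
from transformers import (
    AutoProcessor, AutoModelForZeroShotObjectDetection, SamModel, SamProcessor
)

image = Image.open("generation.png").convert("RGB")

# 1) Detect the subject from a text prompt (lowercase, period-terminated query).
dino_id = "IDEA-Research/grounding-dino-tiny"
dino_proc = AutoProcessor.from_pretrained(dino_id)
dino = AutoModelForZeroShotObjectDetection.from_pretrained(dino_id)
inputs = dino_proc(images=image, text="a fairy.", return_tensors="pt")
with torch.no_grad():
    outputs = dino(**inputs)
boxes = dino_proc.post_process_grounded_object_detection(
    outputs, inputs.input_ids, box_threshold=0.4, text_threshold=0.3,
    target_sizes=[image.size[::-1]],
)[0]["boxes"]

# 2) Turn the detected boxes into segmentation masks with SAM.
sam_proc = SamProcessor.from_pretrained("facebook/sam-vit-base")
sam = SamModel.from_pretrained("facebook/sam-vit-base")
sam_inputs = sam_proc(image, input_boxes=[boxes.tolist()], return_tensors="pt")
with torch.no_grad():
    sam_out = sam(**sam_inputs)
masks = sam_proc.image_processor.post_process_masks(
    sam_out.pred_masks, sam_inputs["original_sizes"], sam_inputs["reshaped_input_sizes"]
)
```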

    • @SejalDatta-l9u · 2 months ago +1

      @GrocksterRox Haha, you read my mind!
      I was trying just that before I saw your message.
      I've got this good feeling about using a vision LLM to do the segmentation. Problem is, while I can get the LLM to identify the objects that I want, I don't know how to use the segmentation to do anything interesting with it... yet.
      Have you figured it out?

    • @GrocksterRox · 2 months ago

      @SejalDatta-l9u I think it would depend on the particular image/situation/end goal. Feel free to join my Discord and we can have a quick chat to hopefully get you on the right path - discord.gg/28fQvytzCv

  • @green_anger · 4 months ago +1

    Good stuff! As others mentioned, not many people go beyond a simple tech explanation and show some applications; you do that nicely, although maybe it gets a little hasty (yeah, I understand that you need to limit the video to make it approachable) and it implies some previous knowledge. The last point is maybe not so critical, since it's all in the workflow you share (thank you for that).
    Some criticism: you title the video and talk about poses but never use a pose ControlNet, which may be confusing for people less familiar with it. Although I must say that extracting the pose from the screenshots you used may not always be possible (I tried that a long time ago), so your approach may even be better; it's just the terminology that is confusing. As a less flexible alternative, you can always search for photos with the required pose and use one of the pose models, e.g. DWPose. Also, I remember there was a plugin for A1111 similar to the third option you mentioned; that plugin, or the repo it was built on top of, could be used as a separate local app. It's a bit more complicated to set up, but it gives you the same features locally, plus it can generate the "stick man" that you can use directly in Comfy by uploading it as an image.

    • @GrocksterRox · 4 months ago

      Thank you - this is awesome feedback! To address a few points:
      * Yes, this video in particular was a bit tough to keep shorter given the longer explainer around the posing sites, but point taken that there were more assumptions around people's foundations (generally I try to keep the pace and explanations at an intermediate level)
      * Valid point about using DW Pose (I love doing that), but to your point, extracting the pose is not consistent all the time, and I've found this method to be much more reliable. However, you can definitely use DW Pose in particular circumstances (see the pose-extraction sketch after this reply)
      * Searching for photos is definitely a valid approach, though once you go beyond typical poses, the really specialized poses are much easier via these posing site options
      All great feedback and thanks again!
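      (A quick sketch of the pose-extraction route mentioned in the second bullet, using controlnet_aux's OpenposeDetector as a stand-in for DW Pose and a diffusers pose ControlNet; the model IDs and prompt are illustrative assumptions, and DWposeDetector works similarly but needs extra weights.)

```python
# Sketch: extract a pose skeleton from a reference photo, then condition
# generation on it with an OpenPose ControlNet. Model IDs are illustrative.
import torch
from PIL import Image
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# 1) Turn a reference photo into a pose skeleton ("stick man") image.
detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_image = detector(Image.open("reference_pose_photo.png"))
pose_image.save("pose_skeleton.png")  # can also be loaded directly in ComfyUI

# 2) Generate a new character locked to that pose.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a knight in ornate armor, dramatic forest lighting",
    image=pose_image,
    num_inference_steps=30,
).images[0]
result.save("posed_knight.png")
```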

  • @nabilalhusail4731 · 4 months ago +1

    Ummm... I came here from YouTube suggestions, so I only know the title.
    But I feel confused!
    Is this video about posing characters in SD, or about templates or composition?

    • @nabilalhusail4731 · 4 months ago +1

      I'm not saying it's not interesting. It is, but I feel like I need to watch a bunch of videos from the creator to understand what's happening.

    • @GrocksterRox · 4 months ago

      Hi and welcome! This video is about posing characters in Stable Diffusion (using ComfyUI). There are several techniques in the video, but if you haven't used ComfyUI, this video is a bit more intermediate/advanced and I'd recommend starting with a video to get you going (it's an AMAZING platform) - th-cam.com/video/NaP_PfR7qiU/w-d-xo.html
      Happy to help support and good luck!

  • @syntartica · 4 months ago +1

    Top!

    • @GrocksterRox · 4 months ago

      Thank you so much!