Stable Diffusion ControlNet tutorial by Sebastian Kamph

  • Published 26 Nov 2024

Comments

  • @Vanced2Dua · 2 months ago · +1

    Awesome... keep the A1111 tutorials coming, I really like them.

    • @ThinkDiffusion · 2 months ago

      Thank you for the positive feedback; more tutorials are coming every week!

  • @arsletirott · 27 days ago · +1

    Fantastic, thanks for the guide. I just wonder why you chose "default negative" alongside "digital painting". I'm new to this and I've never used "default negative"; what does it do?
    EDIT: It worked well with the word "cowboy", but when I type "green monster boy" the result becomes a mess and nothing like the cowboy ballerina.

    • @ThinkDiffusion · 19 days ago

      Hi there!
      Default negative is a preset of negative prompts that works well for most SD 1.5 images. That is why Sebastian chose it :)
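
      For readers wondering what such a preset contains, here is a minimal sketch using the diffusers library. The wording of the negative prompt below is an assumption (a typical SD 1.5 preset), not necessarily the exact preset used in the video, and the model ID is the standard SD 1.5 checkpoint.

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      # Assumed wording of a typical SD 1.5 "default negative" preset; A1111 style presets vary.
      DEFAULT_NEGATIVE = (
          "lowres, bad anatomy, bad hands, missing fingers, extra digit, "
          "cropped, worst quality, low quality, jpeg artifacts, signature, "
          "watermark, text, blurry"
      )

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      image = pipe(
          prompt="cowboy ballerina, digital painting",
          negative_prompt=DEFAULT_NEGATIVE,  # steers sampling away from these concepts
          num_inference_steps=25,
      ).images[0]
      image.save("cowboy_ballerina.png")
      ```

      Comparing runs with and without negative_prompt is an easy way to see what the preset is actually doing.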

  • @Cu-gp4fy · 2 months ago · +1

    Seems to have more control than Midjourney, nice!

    • @ThinkDiffusion · 2 months ago

      Yes, with Stable Diffusion you have way more control than with any other AI image generator :)

  • @darbycarpenter3032 · 1 month ago · +1

    I have the Canny button but there is no model. I went to the model folder and nothing is there; all I have is the OpenPose one. Could you post a link to the Canny model?

    • @ThinkDiffusion · 1 month ago

      Hi there!
      Of course, here you go: huggingface.co/lllyasviel/sd-controlnet-canny
      Happy generating!
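
      If you prefer to fetch that checkpoint from a script instead of the browser, here is a minimal sketch using huggingface_hub. The destination path assumes A1111's default sd-webui-controlnet folder layout, which is an assumption; adjust it to your install and pick the checkpoint file your extension expects from the repo's file list.

      ```python
      from huggingface_hub import snapshot_download

      # Pull the Canny ControlNet repo linked above.
      # The local_dir assumes the default A1111 + sd-webui-controlnet layout (an assumption);
      # move or rename the downloaded checkpoint if your extension expects a single file in models/.
      snapshot_download(
          repo_id="lllyasviel/sd-controlnet-canny",
          local_dir="stable-diffusion-webui/extensions/sd-webui-controlnet/models/sd-controlnet-canny",
      )
      ```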

  • @4thObserver · 2 months ago · +1

    I usually do manual inpainting for fingers instead; it's much more accurate that way and gives us humans something left to do (lol). I'll use ControlNet when I want a real photo as the pose reference or a specific piece of architecture as the background, because you can layer these together (see the sketch after this thread).

    • @ThinkDiffusion · 2 months ago

      Good point, yes, it can be more accurate, and if you enjoy the process of doing it, that's all that matters!
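
    To illustrate the layering @4thObserver describes, here is a minimal sketch using the diffusers multi-ControlNet API: one ControlNet conditioned on a pose skeleton and one on a Canny edge map of the background. The model IDs are the standard lllyasviel SD 1.5 ControlNets; the prompt, image paths, and weights are placeholders.

    ```python
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    # One ControlNet for the pose reference, one for the architecture/background edges.
    pose_net = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    )
    canny_net = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )

    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=[pose_net, canny_net],  # layered ControlNets
        torch_dtype=torch.float16,
    ).to("cuda")

    # Preprocessed conditioning images (placeholder paths): a pose skeleton and a Canny edge map.
    pose_map = load_image("pose_skeleton.png")
    edge_map = load_image("building_edges.png")

    image = pipe(
        prompt="cowboy ballerina in front of a modern building, digital painting",
        image=[pose_map, edge_map],
        controlnet_conditioning_scale=[1.0, 0.7],  # per-ControlNet weight (placeholder values)
        num_inference_steps=25,
    ).images[0]
    image.save("layered.png")
    ```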

  • @gandonius_me · 2 months ago

    Hello, I would like to ask a question. I set the openpose_full preprocessor with the control_v11p_sd15_scribble [d4ba51ff] model, and I get the skeleton from the source image, but when I click generate the image is created from the prompt alone, without the skeleton.
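
    For comparison, here is a minimal diffusers sketch of a matched pairing, where the skeleton produced by an OpenPose preprocessor is fed to an OpenPose ControlNet rather than a scribble model. The model IDs, prompt, and image path are assumptions for illustration.

    ```python
    import torch
    from controlnet_aux import OpenposeDetector
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    # Extract the pose skeleton from the source image (placeholder path).
    detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
    skeleton = detector(load_image("source.jpg"))

    # Pair the OpenPose conditioning image with the matching OpenPose ControlNet.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    image = pipe("ballerina, digital painting", image=skeleton).images[0]
    image.save("posed.png")
    ```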