LCM + AnimateDiff High Definition (ComfyUI) - Turbo generation with high quality

  • Published on Sep 6, 2024
  • In this video we explore how to use the LCM (Latent Consistency Model) LoRA, which promises to speed up image and animation generation by 10 times.
    Discord: / discord
    #animatediff #comfyui #stablediffusion
    ============================================================
    💪 Support this channel with a Super Thanks or a ko-fi! ko-fi.com/koal...
    ☕ Amazing ComfyUI workflows: tinyurl.com/y9...
    🚨 Use Runpod and access powerful GPUs for the best ComfyUI experience at a fraction of the price. tinyurl.com/58... 🤗
    ☁️ Starting in ComfyUI? Run it on the cloud without installation, very easy! ☁️
    👉 RunDiffusion: tinyurl.com/yp... 👉15% off first month with code 'koala15'
    👉 ThinkDiffusion: tinyurl.com/4n...
    🤑🤑🤑 FREE! Check my runnable workflows in OpenArt.ai: tinyurl.com/2t...
    ============================================================
    While LCM, in combination with AnimateDiff, does speed things up, the detail quality is not great. However, just by adding a 2nd KSampler with a few steps, we can generate an amazing animation, as good as without LCM. The extra pass comes, of course, at the expense of some additional rendering time, but generation is still 2 to 3 times faster overall.
    I also compare critical parameters in the KSamplers, to find a better balance of generation time vs. detail. All of this uses the Instant Lora method and the conditional masking method shown in previous videos.
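    The video builds this two-pass setup as ComfyUI nodes. As a rough analogue, here is a minimal diffusers sketch of the same idea (a fast LCM pass followed by a short refinement pass); the model and LoRA names are the public Hugging Face repos for the weights linked below, and all parameter values are illustrative, not the video's exact settings.

```python
# Sketch of the two-pass idea (the video wires this as ComfyUI KSamplers):
# pass 1 renders quickly with the LCM LoRA, pass 2 refines with an
# ordinary sampler at low strength to recover detail.
import torch
from diffusers import (AutoPipelineForText2Image,
                       AutoPipelineForImage2Image, LCMScheduler)

prompt = "a koala walking through a neon city, cinematic"

# Pass 1: DreamShaper v8 + LCM LoRA, few steps, low CFG.
base = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16).to("cuda")
base.load_lora_weights("latent-consistency/lcm-lora-sdv1.5")
base.scheduler = LCMScheduler.from_config(base.scheduler.config)
draft = base(prompt, num_inference_steps=6, guidance_scale=1.5).images[0]

# Pass 2: a short img2img refinement with a regular sampler, analogous
# to the 2nd KSampler in the video (low strength keeps the composition).
refiner = AutoPipelineForImage2Image.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16).to("cuda")
refined = refiner(prompt, image=draft, strength=0.35,
                  num_inference_steps=20, guidance_scale=7.0).images[0]
refined.save("refined.png")
```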
    The workflow and frames to test this workflow are found in this Civit.AI article: tinyurl.com/w2...
    The LCM models:
    LCM LORA SD1.5: tinyurl.com/yc...
    LCM LORA SDXL: tinyurl.com/ke...
    Basic requirements:
    ComfyUI: tinyurl.com/24...
    ComfyUI Manager: tinyurl.com/yc...
    Vast.ai: tinyurl.com/5n...
    Runpod: tinyurl.com/mv...
    Custom nodes:
    AnimateDiff Evolved: tinyurl.com/yr...
    Advanced ControlNet custom node: tinyurl.com/yc...
    VideoHelper Suite: tinyurl.com/47...
    ControlNet Auxiliary preprocessors: tinyurl.com/3j...
    IP Adapter: tinyurl.com/3x...
    ComfyUI Impact pack: tinyurl.com/4j...
    ComfyUI Inspire pack: tinyurl.com/2w...
    KJ Nodes: github.com/kij...
    WAS node Suite: tinyurl.com/2a...
    Models:
    DreamShaper v8 (SD1.5): tinyurl.com/3r...
    ControlNet v1.1: tinyurl.com/je...
    vae-ft-mse-840000-ema-pruned VAE: tinyurl.com/c9...
    ClipVision model for IP-Adapter: tinyurl.com/2w...
    IP Adapter plus SD1.5: tinyurl.com/2p...
    Motion module mm-stabilized-mid: tinyurl.com/mr...
    Upscale RealESRGANx2: tinyurl.com/2f...
    Tracklist:
    [TBD]
    My other tutorials:
    Animatediff + ControlNet + conditional masking: • Animatediff perfect sc...
    AnimateDiff and Instant Lora: • AnimateDiff + Instant ...
    ComfyUI animation tutorial: • Stable Diffusion Comfy...
    Vast.ai: • ComfyUI - Vast.ai: tut...
    TrackAnything: • ComfyUI animation with...
    Videos: Pexels
    Music: TH-cam Music Library
    Edited with Canva and ClipChamp. I record the material in PowerPoint.
    Subscribe to Koala Nation Channel: cutt.ly/OZF0UhT
    © 2023 Koala Nation
    #comfyui #animatediff #stablediffusion #lcm

Comments • 26

  • @ahmetab06
    @ahmetab06 8 months ago +2

    It does a great job of making the lights disappear in the background.

  • @kkryptokayden4653
    @kkryptokayden4653 9 months ago +3

    Very good, I tweaked some stuff to meet my needs and it works great

    • @koalanation
      @koalanation 9 months ago +1

      Well done! That is the spirit!

  • @deephansda982
    @deephansda982 9 months ago +1

    Wow🔥, great optimization

  • @MrPer4illo
    @MrPer4illo 9 months ago +1

    Great work! Thanks for the workflow.
    I can't make it work with SDXL though.
    What should be used in IPAdapter and CLIP vision nodes?

    • @koalanation
      @koalanation 9 months ago +2

      Check the IP adapters for XL, like ip-adapter-plus_sdxl_vit-h.bin (for example). Some of them work with the SD1.5 clipvision; otherwise, download the SDXL clipvision. Make sure the IP adapter models are in the IP Adapter Plus custom node folder.

    • @MrPer4illo
      @MrPer4illo 9 months ago

      @@koalanation thanks. It didn't work, but I don't give up 🙂

    • @koalanation
      @koalanation 9 months ago +2

      With SDXL you also need the SDXL controlnets... could that be the issue? You also need the SDXL text encoders... I am working on a new workflow with the IP adapter I mentioned before, the SD1.5 clipvision and AnimateDiff, and it works...
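      For reference, a minimal diffusers sketch of the pairing described above (this is not the video's ComfyUI workflow; the repo names are the public h94/IP-Adapter weights, and the reference image path is hypothetical):

```python
# Sketch: ip-adapter-plus_sdxl_vit-h uses the ViT-H CLIP vision encoder,
# i.e. the same encoder family as the SD1.5 IP adapters, so it is loaded
# explicitly alongside the SDXL pipeline.
import torch
from transformers import CLIPVisionModelWithProjection
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter", subfolder="models/image_encoder",
    torch_dtype=torch.float16)

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    image_encoder=image_encoder, torch_dtype=torch.float16).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models",
                     weight_name="ip-adapter-plus_sdxl_vit-h.safetensors")
pipe.set_ip_adapter_scale(0.7)

ref = load_image("reference.png")  # hypothetical reference image
out = pipe("a koala astronaut, studio lighting",
           ip_adapter_image=ref, num_inference_steps=30).images[0]
out.save("sdxl_ipadapter.png")
```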

  • @CHNLTV
    @CHNLTV 7 months ago

    How did you create the M-LSD line sequences as well as the depth sequences?

    • @koalanation
      @koalanation 7 months ago

      I prepared the sequences separately, using the Zoe depth map and M-LSD lines preprocessors on the original background video.
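      As a sketch of that preparation step (assuming the controlnet_aux package and a ./frames folder of extracted PNG frames; the folder names are illustrative):

```python
# Generate M-LSD line and Zoe depth control sequences from the frames
# of a background video, as described in the reply above.
from pathlib import Path
from PIL import Image
from controlnet_aux import MLSDdetector, ZoeDetector

mlsd = MLSDdetector.from_pretrained("lllyasviel/Annotators")
zoe = ZoeDetector.from_pretrained("lllyasviel/Annotators")

Path("mlsd").mkdir(exist_ok=True)
Path("depth").mkdir(exist_ok=True)

for frame_path in sorted(Path("frames").glob("*.png")):
    frame = Image.open(frame_path)
    mlsd(frame).save(Path("mlsd") / frame_path.name)   # line map
    zoe(frame).save(Path("depth") / frame_path.name)   # depth map
```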

  • @user-qv1in8hq5f
    @user-qv1in8hq5f 8 months ago

    Hello, sorry, I didn't understand the command at 1:54. Did you say Ctrl+Shift+B?

    • @koalanation
      @koalanation 8 months ago +3

      Ctrl+Shift+V, to paste with the connections

  • @yvann.mp4
    @yvann.mp4 9 months ago

    I don't understand how the two KSamplers are used together, as in the video only the first KSampler (LCM) is connected to the VAE decode. Can you please explain?

    • @koalanation
      @koalanation 9 months ago +3

      Hi. Thanks for the question.
      The latent of the 2nd KSampler has to be connected to a VAE Decode (which can then be connected to a Video Combine node). I skipped that part to avoid repeating myself (I showed how it is connected for the first KSampler), but it seems that was not clear enough. I hope this clarifies your question.

    • @yvann.mp4
      @yvann.mp4 9 months ago

      @@koalanation thank you, so it means that I process the two different KSamplers, each with a VAE decode, and then put the two VAE decodes together into a combined video?

    • @koalanation
      @koalanation 9 months ago +1

      @@yvann.mp4 for the higher-quality video, connect the latent output of the 1st sampler to the latent input of the 2nd. Then connect the latent of the 2nd sampler to the VAE decode, and the image output to Video Combine. In the video I just wanted to show the results next to each other and generate two animations, but you do not need to decode the first one if you don't want to.
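      In ComfyUI API-format JSON, that wiring looks roughly like this (node IDs, upstream node names, and the sampler settings are placeholders, not values copied from the downloadable workflow):

```python
# Rough ComfyUI API-format sketch of the wiring described above.
# "model", "pos", "neg", "latent" and "vae" stand in for the upstream
# nodes; only the latent hand-off between the two samplers matters here.
wiring = {
    "ksampler_lcm": {                       # 1st pass: LCM, few steps
        "class_type": "KSampler",
        "inputs": {"model": ["model", 0], "positive": ["pos", 0],
                   "negative": ["neg", 0], "latent_image": ["latent", 0],
                   "seed": 42, "steps": 6, "cfg": 1.5,
                   "sampler_name": "lcm", "scheduler": "sgm_uniform",
                   "denoise": 1.0}},
    "ksampler_refine": {                    # 2nd pass: refine the latent
        "class_type": "KSampler",
        "inputs": {"model": ["model", 0], "positive": ["pos", 0],
                   "negative": ["neg", 0],
                   # latent OUT of the 1st sampler -> latent IN of the 2nd
                   "latent_image": ["ksampler_lcm", 0],
                   "seed": 42, "steps": 12, "cfg": 7.0,
                   "sampler_name": "euler", "scheduler": "normal",
                   "denoise": 0.4}},
    "decode": {                             # only the 2nd pass is decoded
        "class_type": "VAEDecode",
        "inputs": {"samples": ["ksampler_refine", 0], "vae": ["vae", 0]}},
}
```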

  • @yvann.mp4
    @yvann.mp4 9 months ago

    Thanks a lot !!

  • @aifreeart
    @aifreeart 9 months ago

    What should I put in the openpose and background controlnet folder?

    • @koalanation
      @koalanation 9 months ago +1

      Check out the article in Civit AI: tinyurl.com/33krneae
      And download the assets in the input.zip file.
      You have there the openposes, M-LSD lines and Zoe depth maps examples to use in the workflow.
      You can also use your own poses and backgrounds. Just make sure you are using the right controlnets and preprocessors (if needed)

    • @aifreeart
      @aifreeart 9 months ago

      thank you
      @@koalanation

  • @aifreeart
    @aifreeart 9 months ago

    good

  • @user-bc6oo4hq5p
    @user-bc6oo4hq5p 9 months ago +1

    Good

  • @DopalearnEN
    @DopalearnEN 9 months ago +1

    Love your work! Koala - want to collaborate?

    • @koalanation
      @koalanation 8 months ago +1

      Sounds interesting. How can I reach out to you?

    • @DopalearnEN
      @DopalearnEN 8 months ago

      My comments are getting deleted for some odd reason. Did you see my response?

    • @koalanation
      @koalanation 8 months ago +1

      Hi! I can see this response but no other message... nothing held for review, either...