ComfyUI Video to Video Animation with Animatediff LCM Lora & LCM Sampler

  • Published on 27 Nov 2024

Comments • 60

  • @information4society
    @information4society 8 months ago +1

    Thanks so much for the video. You have been a huge help as I transition from A1111 to ComfyUI. keep up the great work and I hope your channel blows up

    • @goshniiAI
      @goshniiAI  8 months ago

      Your words of support are really meaningful. I'm glad the videos helped you make the switch to ComfyUI. Let's keep growing together!

    • @information4society
      @information4society 8 months ago

      Definitely. I've been making music videos with Deforum + PARSEQ, but LCM with AnimateDiff was so much quicker. I've been looking at how to upscale the vids; I've been watching a guy, Stephan Taul, but I don't have the foundation to understand and follow him yet. You do a great job of not losing a creator at my level of understanding. Thanks @@goshniiAI

    • @goshniiAI
      @goshniiAI  8 months ago +1

      @@information4society I'm glad the videos are helpful for creators at all levels of understanding, and I appreciate you taking the time to share your experience. It's very encouraging.

  • @lordmo3416
    @lordmo3416 14 days ago

    Thanks for saving me hours of experimentation... you're most kind

    • @goshniiAI
      @goshniiAI  14 days ago

      You are most welcome, Lord. Thank you for your feedback.

  • @NERDDISCO
    @NERDDISCO 8 months ago

    I'm so looking forward to trying this out once I have some time! Thank you very much!!!

    • @goshniiAI
      @goshniiAI  8 months ago +1

      You are most welcome. Happy creating, and thank you for your support!

  • @M--S
    @M--S 1 month ago

    Very good explanation! Thank you!

    • @goshniiAI
      @goshniiAI  1 month ago

      You are welcome; I appreciate your feedback.

    • @M--S
      @M--S 1 month ago

      @@goshniiAI By the way, I found out how to make it easier for me to follow your explanations: cut the speed down to 50%! 😉 Then you sound like you've drunk half a bottle of whisky (you really must try it, I mean listening at the reduced speed, not the whisky), but the reduced speed of your thoughts then matches my ability to digest them. 🙃😊🙏

    • @goshniiAI
      @goshniiAI  1 month ago

      ​@@M--S LOL! Good one! I am glad you are sharing your experience of having fun while also learning. :)

  • @codestuff2821
    @codestuff2821 29 days ago

    Concise demonstration

    • @goshniiAI
      @goshniiAI  29 days ago

      Thank you for your encouraging feedback!

  • @swoodc
    @swoodc 8 months ago

    Your last video was great, thank you for the workflow help since I don't know what I'm doing. I just started watching this vid, so hopefully it's fire too.

    • @goshniiAI
      @goshniiAI  8 months ago

      I appreciate your feedback. It's encouraging to hear the workflow was useful.

  • @bonsai-effect
    @bonsai-effect 8 months ago

    Great tutorial, Goshnii... can't wait to try it.

    • @goshniiAI
      @goshniiAI  8 months ago

      Have fun creating! Thank you for your lovely feedback.

  • @ValleStutz
    @ValleStutz 8 months ago

    Works very well! Thank you! Any method to get rid of the flicker/morphing?

    • @goshniiAI
      @goshniiAI  8 months ago

      I hope you had some exciting results. Thank you for your feedback. I'll need to research flickers before deciding on a topic for future videos.

  • @kattarsisss
    @kattarsisss 8 months ago

    Thank you very much for your videos!!)))

    • @goshniiAI
      @goshniiAI  8 months ago

      I appreciate hearing from you, and you are very welcome.

  • @SejalDatta-l9u
    @SejalDatta-l9u 5 months ago

    Great video!
    A few quick questions.
    1. Can you show an instance of image to video using the LCM method? An image of a person copying the movement of a video (think DWPose, etc.).
    2. How would you treat a situation where you have a person in a video clip, but when translated to DWPose, some of the movement is cut off screen?
    3. Do you have an LCM video that you've upscaled to keep the quality and fix any deformed faces?
    You've earned a loyal subscriber my friend!

    • @goshniiAI
      @goshniiAI  5 months ago

      Hello there, and thank you for your support!
      1. I believe the use of ControlNet can help with question 1.
      2. When movement gets cut off in DWPose, make sure your subject stays fully in frame throughout the video. Cropping or resizing the clip might help.
      3. For upscaling LCM videos and fixing deformed faces, you can add a hires fix to the workflow, or tools like Topaz Video AI can help upscale and refine the details of your animation.

  • @mrvoteps
    @mrvoteps 1 month ago

    I've run into an issue:
    VHS_LoadVideo
    cannot allocate array memory
    Are there limits on how long the video can be, or on its quality?

    • @goshniiAI
      @goshniiAI  29 days ago

      You may be running out of memory; check that the original video's frame count matches the batch size you are using in ComfyUI.
      You can also lower the frame resolution to save some memory, then use a video upscaler.

    • @mrvoteps
      @mrvoteps 24 days ago

      @@goshniiAI The batch size in the latent noise? I didn't notice it needed to match. Is the batch size the number of frames in the video?

    • @goshniiAI
      @goshniiAI  24 days ago

      @@mrvoteps That's great! I'm glad to read that everything is going well.
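The "cannot allocate array memory" error in the thread above is largely arithmetic: the Load Video node holds every decoded frame uncompressed in RAM. A minimal sketch of the estimate, using illustrative clip dimensions and assuming float32 RGB frames (these numbers are assumptions, not values from the video):

```python
# Rough size of the buffer needed to hold decoded video frames.
# Assumes float32 RGB frames (4 bytes per channel value), which is why
# lowering the resolution or capping the frame count frees so much memory.
def frame_buffer_gib(num_frames: int, width: int, height: int,
                     channels: int = 3, bytes_per_value: int = 4) -> float:
    """Return the buffer size in GiB."""
    return num_frames * width * height * channels * bytes_per_value / 1024 ** 3

full = frame_buffer_gib(300, 1920, 1080)   # 10 s of 1080p at 30 fps: ~7 GiB
small = frame_buffer_gib(300, 960, 540)    # halving each side: a quarter of that
print(f"{full:.2f} GiB vs {small:.2f} GiB")
```

Halving the frame size cuts the buffer to a quarter, which is why downscaling before loading and upscaling afterwards works well on limited memory.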

  • @sudabadri7051
    @sudabadri7051 8 months ago

    Awesome thanks mate

    • @goshniiAI
      @goshniiAI  8 months ago

      I'm glad I could assist, and I'm grateful for your feedback.

  • @SanderBos_art
    @SanderBos_art 7 months ago

    Great tutorial :) I was wondering, though: which parameter influences how closely the output resembles the original video? Is it the CFG?

    • @goshniiAI
      @goshniiAI  7 months ago

      Yes, that is accurate; however, the recommended CFG for LCM is between 1 and 2. As a suggestion, you can keep experimenting to see the results.
      The prompt also has an impact on the original video's style.

  • @Nankatsu09
    @Nankatsu09 1 month ago

    Thank you! Is there a way to keep video consistency?

    • @goshniiAI
      @goshniiAI  29 days ago +1

      To keep things consistent, guidance with ControlNet Canny, together with Optical Flow, can help align details between frames.
      You can also achieve this by increasing the strength of the ControlNet model.

    • @Nankatsu09
      @Nankatsu09 28 days ago

      @@goshniiAI Thank you very much mate!! Gonna try it out!

  • @omarzaghloul6169
    @omarzaghloul6169 5 months ago

    Nice... it's much lighter & faster... it works perfectly. How can I make details more consistent & less randomly changing? For example, the character's hair color & clothes keep changing.

    • @goshniiAI
      @goshniiAI  5 months ago

      I'm pleased to hear it's working well for you and that it seems lighter and faster!
      You could try a few suggestions: play with the LoRA weights, or use a fixed seed for each frame.

    • @omarzaghloul6169
      @omarzaghloul6169 5 months ago

      @@goshniiAI Thank you for the prompt reply.
      ... in your workflow, you are using multiple LoRAs... which one's weights should I play with?

    • @goshniiAI
      @goshniiAI  5 months ago

      @@omarzaghloul6169 The LoRAs I used were chosen to fit the animation theme, which may not work in your instance; looking for good character LoRAs may be helpful in your case.

  • @gualguitv
    @gualguitv 1 month ago

    I ran this workflow on an 8GB GPU and it took hours to run. Is that normal? Do you have a workflow for lower VRAM? Thanks

    • @goshniiAI
      @goshniiAI  1 month ago

      Hello there, thank you for giving the workflow a try! On an 8GB GPU, it can take a while depending on the complexity of the scene and the settings you're using. However, you can try reducing the resolution to speed things up, then later use an upscaler to refine the details.

  • @90boiler
    @90boiler 2 months ago

    I don't understand; my result is only a depth map video, whatever I try. Can you post a screenshot of your final work so I can see the models you use?

    • @goshniiAI
      @goshniiAI  2 months ago

      Hi there, sorry to read that; however, the workflow can be downloaded for free using the link provided in the description.

    • @90boiler
      @90boiler 25 days ago

      @@goshniiAI I get "`PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS." I run it on a MacBook M1 Pro.

    • @goshniiAI
      @goshniiAI  24 days ago

      @@90boiler For now, this setup can still produce good outcomes, although it is not as optimised as it is on NVIDIA GPUs. Some users have had success by reducing batch sizes or simplifying node setups to reduce system load. I hope we see better workflows for CPUs soon.

  • @MisterCozyMelodies
    @MisterCozyMelodies 8 months ago

    That's awesome! Could you tell me your CPU, GPU, and RAM?

    • @goshniiAI
      @goshniiAI  8 months ago +1

      Thanks for the compliment! My setup includes an Intel Core i7 processor, an NVIDIA GeForce RTX 3060 GPU, and I have 32GB of RAM. tinyurl.com/mtwjn4bp

    • @MisterCozyMelodies
      @MisterCozyMelodies 6 months ago

      @@goshniiAI thanks

  • @cgstone30
    @cgstone30 8 months ago

    Fire drop

    • @goshniiAI
      @goshniiAI  8 months ago

      Blazing feedback! Thanks a lot.

  • @phi1s0n
    @phi1s0n 7 months ago

    Is LCM AnimateDiff possible with SDXL models?

    • @goshniiAI
      @goshniiAI  7 months ago +1

      Unfortunately, this workflow is not compatible with SDXL models. I am researching and hope to share the process of using SDXL models. I'd love that as well.

  • @Meh_21
    @Meh_21 3 months ago

    Hi! Your workflow uses only the ControlNet 2 group; LineArt is bypassed. :)

    • @goshniiAI
      @goshniiAI  3 months ago

      Hello there. You are correct. :)
      The workflow has two possible processors for ControlNet; if the Sampler Custom node gives no problems, you can use both.
      However, you are not limited to one: you are free to use either processor, or switch between them entirely, depending on what you require.
      Thank you for the observation.

    • @Meh_21
      @Meh_21 3 months ago

      @@goshniiAI thanks to you, great workflow.

    • @goshniiAI
      @goshniiAI  3 months ago

      @@Meh_21 You are welcome! I'm grateful

  • @linashu6381
    @linashu6381 8 months ago

    Thank you very much for your sharing.
    I ran into a problem: "DepthAnythingPreprocessor" shows red. I used the Manager's "Install Missing Custom Nodes",
    but it displays the error: "File "D:\AI\ComfyUI_Full\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes-main\__init__.py", line 1, in <module>
    from inference_core_nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
    ModuleNotFoundError: No module named 'inference_core_nodes'
    Cannot import D:\AI\ComfyUI_Full\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes-main module for custom nodes: No module named 'inference_core_nodes'". Excuse me, how can I solve this problem?👧

    • @goshniiAI
      @goshniiAI  8 months ago

      Hello there, I tried to follow your path, but I don't have (ComfyUI-Inference-Core-Nodes-main) in my custom nodes installation folder. Make sure that all the necessary files and dependencies are properly installed and located in the specified directories, since our setups may be different.
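For a ModuleNotFoundError like the one in this thread, a quick way to confirm whether the package is visible to the Python environment ComfyUI runs in is `importlib.util.find_spec`. A minimal sketch (the module name comes from the traceback above; the requirements.txt hint is a common remedy for custom node packs, not a confirmed fix for this exact case):

```python
# Check whether a custom node's Python dependency is importable.
# find_spec returns None when the package is not installed in the
# interpreter that is running this code.
import importlib.util

def module_available(name: str) -> bool:
    return importlib.util.find_spec(name) is not None

if not module_available("inference_core_nodes"):
    # Typical remedy for ComfyUI custom node packs: install the pack's
    # requirements.txt with the same Python that launches ComfyUI.
    print("inference_core_nodes is not installed in this environment")
```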

  • @викторВиктор-ы5ж
    @викторВиктор-ы5ж 8 months ago

    Greetings from Russia. I loaded a 3-second video, but the Video Combine shows 1 second of video. How can I increase the time from 1 second to 3 seconds? I'm writing via Google Translate!!!

    • @goshniiAI
      @goshniiAI  8 months ago

      Hello there, glad to hear from you in Russia.
      1. Increase the Frame Load Cap (Load Video node): drive.google.com/file/d/1hIm53FFZW6xW2qmY7jESERqvAEy04Dta/view?usp=sharing
      2. Increase the Batch Size (Empty Latent Image): drive.google.com/file/d/1tuqv9CsdtmjvN1IojzwJKSzwZZ_ckY3E/view?usp=sharing
      These numbers need to match for the desired duration.
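The matching rule above boils down to one number: frames to process = clip duration × frame rate, and both the Frame Load Cap and the Batch Size should be set to it. A minimal sketch (the 3-second clip and 24 fps are illustrative values, not the asker's actual settings):

```python
# Both the Load Video node's frame load cap and the Empty Latent Image
# node's batch size must equal the number of frames you want animated;
# if the batch size is smaller, only part of the clip gets processed.
def frames_needed(duration_s: float, fps: float) -> int:
    return int(duration_s * fps)

frame_load_cap = frames_needed(3, 24)  # 3-second clip at 24 fps
batch_size = frame_load_cap            # keep the two nodes in sync
print(frame_load_cap)                  # 72
```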