Thanks so much for the video. You have been a huge help as I transition from A1111 to ComfyUI. Keep up the great work, and I hope your channel blows up!
Your words of support are really meaningful. I'm glad the videos helped you make the switch to ComfyUI. Let's keep growing together!
Definitely. I've been making music videos with Deforum + PARSEQ, but LCM with AnimateDiff was so much quicker. I've been looking at how to upscale the vids; I've been watching a guy, Stephan Taul, but I don't have the foundation to understand and follow him yet. You do a great job of not losing a creator at my level of understanding. Thanks @@goshniiAI
@@information4society I'm glad the videos are helpful for creators at all levels of understanding, and I appreciate you taking the time to share your experience. It's very encouraging.
Thanks for saving me hours of experimentation... you're most kind
You are most welcome, Lord. Thank you for your feedback.
I'm very much looking forward to trying this out once I have some time! Thank you very much!!!
You are most welcome. Happy creating, and thank you for your support!
Very good explanation! Thank you!
You are welcome, I appreciate your feedback.
@@goshniiAI By the way, I found out how to make it easier for me to follow your explanations: cut the playback speed down to 50%! 😉 Then you sound like you've drunk half a bottle of whisky (you really must try it - I mean the speed reduction, not the whisky), but the reduced pace matches my ability to digest it. 🙃😊🙏
@@M--S LOL! Good one! I am glad you are sharing your experience of having fun while also learning. :)
Concise demonstration
Thank you for your encouraging feedback!
Your last video was great, thank you for the workflow help since I don't know what I'm doing. I just started watching this vid, so hopefully it's fire too.
I appreciate your feedback. It's encouraging to hear the workflow was useful.
Great tutorial, Goshnii... can't wait to try it.
Have fun creating! Thank you for your lovely feedback.
Works very well! Thank you! Any method to get rid of the flicker/morphing?
I hope you had some exciting results. Thank you for your feedback! I'll need to research flicker reduction before deciding on a topic for future videos.
Thank you very much for your videos!!)))
I appreciate hearing from you, and you are very welcome.
Great video!
A few quick questions.
1. Can you show an instance of image-to-video using the LCM method? An image of a person copying the movement of a video. Think DWPose etc.
2. How would you treat a situation where you have a person in a video clip, but when translated to DWPose, some of the movement is cut off screen?
3. Do you have an LCM video that you've upscaled to keep the quality and fix any deformed faces?
You've earned a loyal subscriber my friend!
Hello there, and thank you for your support!
I believe the use of ControlNet can help with question 1.
When dealing with movements cut off by the screen in DWPose, ensure your subject is fully in frame throughout the video. Cropping or resizing the clip might help.
For upscaling LCM videos and fixing deformed faces, you can include a HiRes Fix in the workflow, or tools like Topaz Video AI can help upscale and refine the details of your animation.
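If cropping by hand is tedious, padding the clip with an extra margin can also keep limbs in frame for DWPose. A minimal sketch, assuming OpenCV is installed; the file names and the 64-pixel margin are hypothetical:

```python
# Minimal sketch: pad a clip with borders so the subject stays fully in
# frame for DWPose. File names and margin size are hypothetical.
import cv2

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
pad = 64  # extra margin on every side so limbs are not clipped

out = cv2.VideoWriter("padded.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps, (w + 2 * pad, h + 2 * pad))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # replicate the edge pixels into the new margin
    out.write(cv2.copyMakeBorder(frame, pad, pad, pad, pad,
                                 cv2.BORDER_REPLICATE))
cap.release()
out.release()
```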
I've run into an issue:
VHS_LoadVideo
cannot allocate array memory
Are there limits on how long the video can be, or on its quality?
You may be running out of memory; check that the original video's frame count matches the batch size you are using in ComfyUI.
You can also lower the resolution of the frame size to save some memory, then use a video upscaler.
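To check those numbers outside ComfyUI, here is a minimal sketch assuming OpenCV is installed; the file name is hypothetical:

```python
# Minimal sketch: read a clip's frame count so the Empty Latent Image
# batch size can be set to match it.
import cv2

cap = cv2.VideoCapture("input.mp4")
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = cap.get(cv2.CAP_PROP_FPS)
cap.release()

print(f"{frames} frames at {fps:.0f} fps ({frames / fps:.1f}s)")
# Use this number for both the Load Video frame cap and the latent batch size.
```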
@@goshniiAI The batch frame in the latent noise? I didn't notice it needed to match; is the batch the number of frames in the video?
@@mrvoteps That's great! I'm glad to read that everything is going well.
Awesome thanks mate
I'm glad I could assist, and I'm grateful for your feedback.
Great tutorial :) I was wondering, though, which parameter influences how closely the output still resembles the original video? Is it the CFG?
Yes, that is accurate; however, the CFG for LCM is recommended to be between 1 and 2. As a suggestion, you can continue experimenting to see the results.
The prompt also has an impact on the original video's style.
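For anyone curious where that CFG range shows up outside ComfyUI, here is a minimal text-to-image sketch using the documented diffusers LCM-LoRA setup; the prompt and the exact guidance value are just assumed starting points to experiment with:

```python
# Minimal sketch: LCM sampling with a low guidance scale (CFG between 1 and 2).
import torch
from diffusers import AutoPipelineForText2Image, LCMScheduler

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

image = pipe(
    "anime style portrait, studio lighting",
    num_inference_steps=4,   # LCM needs very few steps
    guidance_scale=1.5,      # keep CFG in the 1-2 range for LCM
).images[0]
image.save("lcm_test.png")
```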
Thank you! Is there a way to keep video consistency?
To keep things consistent, guidance with ControlNet Canny, combined with Optical Flow, can help align details between frames.
You can also achieve this by increasing the strength of the ControlNet model.
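As a rough illustration of the signals those two preprocessors extract per frame, here is a minimal sketch assuming OpenCV; the frame file names and thresholds are hypothetical starting points:

```python
# Minimal sketch: Canny edges plus dense optical flow between two frames,
# the same kinds of signals ControlNet Canny and Optical Flow rely on.
import cv2

prev = cv2.imread("frame_000.png")   # hypothetical extracted frames
curr = cv2.imread("frame_001.png")

edges = cv2.Canny(curr, 100, 200)    # edge map a Canny ControlNet would follow

prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
# Farneback dense flow: one motion vector per pixel between frames
flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
print(edges.shape, flow.shape)       # flow holds an (x, y) vector per pixel
```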
@@goshniiAI Thank you very much mate!! Gonna try it out!
Nice... it's much lighter & faster... it works perfectly. How can I make details more consistent & less randomly changing? For example, the character's hair color & clothes keep changing.
I'm pleased to hear it's working well for you and that it seems lighter and faster!
You could try a couple of suggestions: play with the LoRA weights, or use a fixed seed for each frame.
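A minimal sketch of the fixed-seed idea, outside ComfyUI and using plain torch; the seed value is arbitrary and the latent shape assumes SD1.5 at 512x512. Reusing one seeded generator makes the starting noise identical for every frame, which removes one source of random drift:

```python
# Minimal sketch: derive the starting latent noise for every frame from
# the same fixed seed so per-frame randomness stays constant.
import torch

SEED = 1234                 # arbitrary, but keep it identical for all frames
shape = (1, 4, 64, 64)      # SD1.5 latent shape for a 512x512 image

for frame_idx in range(16):
    gen = torch.Generator("cpu").manual_seed(SEED)
    noise = torch.randn(shape, generator=gen)  # identical noise every frame
```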
@@goshniiAI Thank you for the prompt reply.
... in your workflow, you are using multiple LoRAs... which one should I play with the weights for?
@@omarzaghloul6169 The LoRAs I used were chosen to fit the animation theme, which may not work in your instance; looking for good character LoRAs may be helpful in your case.
I ran this workflow on an 8GB GPU and it took hours to run. Is that normal? Do you have a workflow to do this for lower VRAM? Thanks.
Hello there, thank you for giving the workflow a try! On an 8GB GPU it can take a while, depending on the complexity of the scene and the settings you're using. However, you can try reducing the resolution to speed things up, then later use an upscaler to refine the details.
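One way to pick the reduced resolution is to scale the source down while keeping both sides divisible by 8, which Stable Diffusion requires. A tiny sketch of the arithmetic; the source size and target width are hypothetical:

```python
# Minimal sketch: scale a clip's resolution down for low-VRAM rendering,
# snapping both sides to multiples of 8 as Stable Diffusion requires.
src_w, src_h = 1920, 1080   # hypothetical source resolution
target_w = 512              # assumed render width for an 8GB GPU

scale = target_w / src_w
w = (int(src_w * scale) // 8) * 8
h = (int(src_h * scale) // 8) * 8
print(w, h)                 # 512 x 288 -> render here, then upscale the output
```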
I don't understand; my result is only a depth-map video whatever I try. Can you post a screenshot of your final work so I can see the models you use?
Hi there, sorry to read that; however, the workflow can be downloaded for free using the link provided in the description.
@@goshniiAI I'm getting: `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS. I run it on a MacBook M1 Pro.
@@90boiler For now, this setup can still produce good outcomes, although it is not as optimised as it is on NVIDIA GPUs. Some users have had success by reducing batch sizes or simplifying node setups to reduce system load. I hope we see better workflows for CPUs soon.
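On that note, the fallback mentioned in the warning has to be enabled before torch loads. A minimal sketch; the environment variable comes straight from the warning itself:

```python
# Minimal sketch: enable the CPU fallback for unsupported MPS ops on
# Apple Silicon. The variable must be set before torch is imported.
import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch
print(torch.backends.mps.is_available())  # True on an MPS-enabled build
```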
That's awesome! Could you tell me what your CPU, GPU, and RAM are?
Thanks for the compliment! My setup includes an Intel Core i7 processor, an NVIDIA GeForce RTX 3060 GPU, and 32GB of RAM. tinyurl.com/mtwjn4bp
@@goshniiAI thanks
Fire drop
Blazing feedback! Thanks a lot.
Is LCM AnimateDiff possible with SDXL models?
Unfortunately, this workflow is not compatible with SDXL models. I am researching and hope to share the process of using SDXL models soon. I'd love that as well.
Hi! Your workflow uses only the ControlNet 2 group; LineArt is bypassed. :)
Hello there. You are correct. :)
The workflow has two possible processors for ControlNet; if the sampler custom node gives no problems, you can use both.
However, you do not have to choose only one; you are free to use either processor or switch between them entirely, depending on what you require.
Thank you for the observation.
@@goshniiAI thanks to you, great workflow.
@@Meh_21 You are welcome! I'm grateful
Thank you very much for sharing.
I ran into a problem: "DepthAnythingPreprocessor" shows red. I used the Manager's "Install Missing Custom Nodes" option, but it displays this error:
File "D:\AI\ComfyUI_Full\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes-main\__init__.py", line 1, in <module>
from inference_core_nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
ModuleNotFoundError: No module named 'inference_core_nodes'
Cannot import D:\AI\ComfyUI_Full\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes-main module for custom nodes: No module named 'inference_core_nodes'
Excuse me, how can I solve this problem?👧
Hello there, I tried to follow your path, but I don't have ComfyUI-Inference-Core-Nodes-main in my custom nodes installation folder. Make sure that all the necessary files and dependencies are properly installed and located in the specified directories, since our setups may be different.
Greetings from Russia. I loaded a 3-second video, but the Video Combine node shows only 1 second of video. How can I increase the time from 1 second to 3 seconds? I'm writing via Google Translate!!!
Hello there, glad to hear from you from Russia.
1. Increase the Frame Load Cap (Load Video node): drive.google.com/file/d/1hIm53FFZW6xW2qmY7jESERqvAEy04Dta/view?usp=sharing
2. Increase the Batch Size (Empty Latent Image): drive.google.com/file/d/1tuqv9CsdtmjvN1IojzwJKSzwZZ_ckY3E/view?usp=sharing
These numbers need to match for the desired duration.
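To make the matching concrete, here is the arithmetic as a tiny sketch; the 24 fps figure is an assumption, so use your clip's actual frame rate:

```python
# Minimal sketch: how many frames a 3-second clip needs.
fps = 24                  # assumed source frame rate
seconds = 3
frames = fps * seconds    # 72
# Set Frame Load Cap (Load Video) and Batch Size (Empty Latent Image) to 72.
print(frames)
```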