Revolutionizing AI Image Generation: Boosting Speed with LCM Models!
- Published Oct 4, 2024
- Let's check out the cool stuff happening in the AI world of image creation! We're diving into a technique called Latent Consistency Models (LCM) that lets you generate images three times faster while still building on the reliable diffusion models!
** Links from the Video Tutorial **
LCM Website: latent-consist...
LCM Github: github.com/luo...
LCM LORA SDXL: huggingface.co...
LCM LORA SDV1.5: huggingface.co...
For the Video Node: github.com/Nuk...
Workflow** : www.patreon.co...
** Let me be EXTREMELY clear: I don't want you to feel obligated to join my Patreon just to access this workflow. My Patreon is there for those who genuinely want to support my work. If you're interested in the workflow, feel free to watch the video - it's not that long, I promise! 🙏
❤️❤️❤️Support Links❤️❤️❤️
Patreon: / dreamingaichannel
Buy Me a Coffee ☕: ko-fi.com/C0C0...
Video from Videvo by Freepik: www.videvo.net...
Thanks for the more detailed breakdown of LCMs. They make WAY more sense now.
Your channel is greatly underrated, thanks for all the info!
Most fasterest than Everest!
Wow, thanks for this tutorial, it's faster than ever 👍
best aituber
Been playing with LCM stuff, not sure if it's just me, but images/video seem a bit darker/burnt than non-LCM, so I have to color-grade it a bit to compensate.
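(Side note on the darker/"burnt" look mentioned above: a quick way to compensate in post is a gamma lift on the pixel values. This is just a plain-Python sketch of the idea, with a made-up gamma value to tune by eye; in practice you'd do this in your editor or with PIL/OpenCV.)

```python
def gamma_lift(pixel, gamma=0.9):
    # pixel is an 8-bit channel value (0..255).
    # gamma < 1.0 brightens midtones while leaving pure
    # black and pure white untouched, which can offset
    # the slightly dark/burnt look of some LCM outputs.
    return round(255 * (pixel / 255) ** gamma)

# Black and white stay fixed; midtones get lifted.
print([gamma_lift(v) for v in (0, 64, 128, 255)])
```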
Good content. I'm going to mess around and do a workflow throwing this into an upscaler after animatediff, then combining the video much further down the workflow after things like facedetailer and hires are applied.
Never mind... won't run with animatediff yet... at least with 1.5 I'm getting errors loading weights.
I've used it with animatediff!! But I'm using animatediff Evo, you can try that!
@@DreamingAIChannel unfortunately that's what I'm using at the moment and I'm still getting the error loading the weights. Appears to be a known issue, so maybe it's model specific, though I've tried several. Oh well, it's late here, so who cares if this run takes 7 hours. ;)
I was excited about LCM, but after a few weeks of using it I ended up dropping it from my workflow: the quality is much lower than with other workflows, and I need an extra process or pass to reach the standard of quality I'm after. In the end it's slower to use LCM than not, unless you're just using it for batch production.
Does this work with animateDiff + controlnet at the same time?
I don't know about the "normal" version, but it works with animatediff Evo!
Actually it does, the only parameters for this LoRA are the number of steps and the CFG.
So it can work with literally any workflow.
Upscaling is one place where the number of steps is already low, so introducing LCM helps.
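(For context on the steps/CFG remark above: classifier-free guidance blends the model's unconditional and conditional predictions, and LCM is typically run with CFG around 1.0–2.0 and only 4–8 steps, versus roughly 7.5 and 20+ steps for a regular SD sampler. A minimal plain-Python sketch of the CFG blend formula, with made-up illustrative numbers:)

```python
def cfg_blend(uncond, cond, scale):
    # Classifier-free guidance: start from the unconditional
    # prediction and move toward the conditional one,
    # amplified by `scale` (the CFG value).
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

uncond = [0.0, 0.0]
cond = [1.0, 2.0]

# Typical SD setting: CFG ~7.5 pushes far past the conditional prediction.
print(cfg_blend(uncond, cond, 7.5))
# Typical LCM setting: CFG ~1.0 just returns the conditional prediction,
# which is why high CFG values tend to break LCM outputs.
print(cfg_blend(uncond, cond, 1.0))
```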
A preview is visible in the LoadVideo node, but //
ERROR:root:Failed to validate prompt for output 89:
ERROR:root:* LoadVideo 88:
ERROR:root: - Value not in list: video: '_import_624e6fd3580914.57382403.mp4' not in []
ERROR:root:Output will be ignored
invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}} //
What's the solution? Thank you.