4GB VRAM normally takes 2 to 5 minutes, this takes 20 seconds. Great for use as a starting point!
You just made me redo my whole workflow since this alone allows me to iterate on ideas so much faster. This stuff moves so damn fast. I get off YouTube for a week and so much changes in AI.
Yeah, it is a bit insane for sure.
Running on a 4090 and it's incredible with Animate Diff. What a dream. Thanks for the incredible video.
Glad you enjoyed it!
This is awesome, it even works on the SDXL turbo models taking my time from about a minute per sample to ~14secs.
Thanks! I've got a pretty slow system and it greatly improved the speed. 45 sec/img down from 3+ min/img👍👍
Great to hear!
The idea here is that you can pair the LCM LoRA with any model and any sampler and scheduler. You don't have to use lcm as the sampler; you can use slow ones like heun and still get great results, just so much faster. Also, with the Comfy Efficient Nodes it feels like a 5x5 XY plot renders as fast as a single image did before. Try that, see the difference 🙂
That's a great idea!
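For anyone who wants to sanity-check the LoRA outside ComfyUI, here is a rough diffusers sketch of the same idea. The repo ids and the step/cfg settings are assumptions based on the public LCM LoRA release, not something shown in the video:

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

# Load an SDXL checkpoint in half precision (repo id assumed; any SDXL model should do).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and attach the LCM LoRA on top of the base model.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# The LCM LoRA is usually run at 4-8 steps with cfg around 1.0-1.5.
image = pipe(
    "a cinematic photo of a lighthouse at dusk",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("lcm_lora_test.png")
```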
I tried it with other samplers instead of lcm, and the results were actually terrible no matter whether I did a few steps or my usual number, and no matter what the cfg value was 😆
However, when using the lcm sampler with settings that give good results, I found that no matter which scheduler I used, all the images were very similar, so I ended up using karras because it was (marginally) the fastest. Even ddim_uniform, which Scott mentions as not seeming to work with lcm, gave great results - just very different from all the other similar-looking images.
Could you please explain what you meant at 4:40: do I need to put the ModelSamplingDiscrete (lcm) node AFTER the LCM LoRA if I want to stack LoRAs?
I have ADHD so I love your short but very informative videos, plz don't stop!
Thanks for the information. I want extremely detailed, hyper-realistic images, and adding a sampler before my main sampler helps out a lot. Doing face swapping and SUPIR upscaling in the same workflow, the results are terrific and also about 10-15 seconds faster per pic now as well.
Yeah, this is an instant tool now for getting the prompts and weights close before I really get to work.
Would love some video time on Swarm. I really want to run generations on it, but it's still a little awkward.
Does the model still exist? I went to the link but can't find the same lora.
keep 'em coming bro!
oh yes! you know it!
Thank you for showing all theses new features ! AI still has so much to show to the world !
It sure is coming along fast!
It's fantastic for quick prompt experimenting!
It really is!
Impressive, thanks for the tip.
Is this the same on a Mac M1? Because I have the lcm sampler and the LoRA in my ComfyUI, but the loading time is still around 300 sec.
Works like a charm for me. Thank you!!!
Great to hear!
Was lost without how to get this installed... figured it out and already love it. Thanks for all your great demos and tutorials
Great to hear!
How do I get it installed?@@sedetweiler
just put the lora into the lora folder, as a normal lora!
If you swap the SDXL model for the Segmind SSD-1B cut-down model, this workflow works even faster, and it will run on a low-end laptop with a 4GB 1650 and 8 GB of RAM.
Could you make a video on how to properly connect this LoRA with the SDXL base and refiner from the earlier video?
We do that in live streams as well. It's very similar to what we did here.
Great Scott (pun intended)... This is amazing. SDXL may be back on the table again. 1024 sdxl generation went from less than 20 seconds for one image to less than 5... =0
It's pretty wicked for sure!
@@sedetweiler Unfortunate that it seems to degrade quickly with ControlNet added to the flow. =\
My KSampler doesn't show an image like Scott's does, can someone advise why? Is it a special KSampler?
Hi! I'm trying it with AnimateDiff but I keep getting this error "'VanillaTemporalModule' object has no attribute 'cons", have you also got it / any solutions? thanks!!
I will have to mess around with it.
Where can I find the LCM Sampler for my Ksampler? Its not in my list and I just updated my Comfy.
I have the same issue
Just update comfy ui through the manager and restart.
Yes, always be sure you are updated. 99.9% of the time that will be the cause of most issues.
workflow posted?
If you're on A1111 and don't have LCM sampler, Euler A works well enough to test this. (It's not perfect, but it's usable.)
[edit] - an update fixed it
nope.
There is no sampler_name "lcm". I had to try euler_ancestral, but that looks mostly shit.
I had the same until I updated ComfyUI via Manager -> Update ComfyUI (the update .bat file didn't do it, but going through the Manager did).
yeah the LCM k-sampler just came out today, you need to update Comfy
Update comfy. This is less than 24 hours old.
@@sedetweiler
I updated and see it, thanks!
I now have a problem with LoRAs not being used, but I'll look around for answers.
Is it possible to combine it with animatediff? I ran into a lot of errors when I tried, and a lot of models don't seem to be compatible
I have seen people doing so, but I don't tend to do a lot of animation.
don't you have to connect the LoadLora to the prompt? Does it really matter?
wow - do the LCM LoRAs only work with ComfyUI, or do they also work in A1111?
I have no idea on A1111.
it does work with gtx1060 6gb and 16gb ram. makes excellent speed improvement
rock on!
Loll..... I was hoping you'd show how the LoRA model connected to the CLIP O_O. Guess not. lol
using a 1660ti and dreamshaper 8, 512x512 images only take 2 seconds to generate. super crazy!
Thank you! Do you have tutorials on changing the pose without changing the character's details as much as possible?
Not yet. That will be a bit of a challenge, but we can probably do it using a few techniques.
Thank you for answering. Yeah, to get there I mostly use inpainting, and some combos of IPAdapter, Posex, and ControlNet help a bit. I found it very cool to use the right LoRA on top, but it seems to produce a character too similar to the previous one. @@sedetweiler
@@Kavsanv I was leaning on the IPadapter for a lot of that for sure, but I also think a bit of roop in combination with that would also help.
Sometimes it's hard to find the model you're using on Huggingface. Did you rename the downloaded LoRA? Edit: I think I found it. I downloaded pytorch_lora_weights.safetensors
from lcm-lora-sdxl and renamed it to lcm_sdxl_lora_weights.safetensors to match yours.
Yes, sorry. They are all named the same thing, so you will always need to rename them. I should have mentioned that, but it is a very consistent thing with the names being generic.
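If it helps, here is a tiny Python sketch of the download-and-rename step. The destination path assumes a default ComfyUI folder layout, so adjust it to your install:

```python
import shutil
from huggingface_hub import hf_hub_download

# Every LCM LoRA repo ships the file under the generic name
# "pytorch_lora_weights.safetensors", so copy it into the ComfyUI loras
# folder under a model-specific name to avoid collisions.
src = hf_hub_download(
    repo_id="latent-consistency/lcm-lora-sdxl",
    filename="pytorch_lora_weights.safetensors",
)
shutil.copy(src, "ComfyUI/models/loras/lcm_sdxl_lora_weights.safetensors")
```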
Thanks for the tutorial. I wonder how much amazing stuff is hiding in comfyui and the stable diffusion world that we'd never know about without your videos.
There is a ton! There are also things in AUTO1111 that no one has covered yet that I will probably make videos on as well. So much, and it is constantly evolving!
I'm at the download site crossroads feeling lost: Latent Consistency Models LoRAs vs Latent Consistency Models Weights. I downloaded one of each and am no wiser; it's not easy being a noob. PS: pytorch_lora_weights.safetensors = 380MB, the other = 4.5GB. My guess is it's the smaller one.
@@bwheldale Latent Consistency Models LoRAs vs Latent Consistency Models Weights: the first one is the LCM LoRA (SDXL LCM LoRA, SD1.5 LCM LoRA, SSD-1B LCM LoRA), the second one is the full LCM models (SDXL, SSD-1B, Dreamshaper7). This video uses the smaller one, the LoRA.
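To make the difference concrete, here is a rough diffusers sketch of the "Weights" route, where the distilled LCM UNet replaces the base one instead of a LoRA being patched on top. Repo ids are assumptions; the video itself uses the LoRA route:

```python
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler

# The "Weights" route: a fully distilled LCM UNet (several GB) that replaces the
# SDXL UNet outright, instead of a ~380 MB LoRA applied to the original model.
unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl", torch_dtype=torch.float16, variant="fp16"
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
image = pipe("a watercolor fox", num_inference_steps=4, guidance_scale=1.0).images[0]
```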
Even with this LoRA (1.5) it's taking me 2 minutes to generate a single image. Is my MacBook Pro (8GB RAM) actually that bad, or is there something else I could be missing?
Maybe you're running it in CPU mode.
How do you get the Model Sampling Discrete node to show up? I don't have it
Update comfy.
Where to find an LCM Sampler?
Make sure you are on the latest of all nodes and comfy. It is in the comfy core, so a git pull should get you all you need.
Very clear ! Thanks !
Have you tried it with an SD1.5 checkpoint?
Great content. What if you run it at 30-50 steps? Is the quality better than without this LoRA, or is it just a speed boost at low step counts?
Nope, it often seems to get worse and it changes a ton as it advances.
Not with SGM_Uniform, since it adds constant noise and just keeps changing, but I got higher quality on SD1.5 at 20 steps with exponential.
For SDXL I've also had the best results with exponential, at around 8 steps so far.
I did exactly what you did but it's taking way longer. Don't know why. I'm using an RTX 4060 (laptop).
I have a technical question. In the video you say that cfg values of 1 or below make the sampler ignore the negative prompt. I have no idea if this is true, because I usually have no negative prompt or just a short one, so I could not see a difference in the images.
But what I saw - and this brings me to the technical question: at a cfg value of 1.0 (and only there, neither above nor below 1), the steps took only about 50% to 60% of the usual time on my GPU. So could it be that something happens exactly at 1.0 that is different from the other values, like ignoring the negative prompt only at 1 but not at the other values?
And if so, is there a way for people who usually don't use negative prompts to speed up the rendering by making the KSampler ignore the negative prompt? Because the speed increase only happens at cfg 1; simply leaving the prompt empty at, e.g., higher values does not work. Unplugging the negative prompt does not work either, because it simply throws an error for an unconnected socket and doesn't start to render.
Maybe this is not even happening on other machines... but if it is, I would really like to find a way to deliberately ignore the negative prompt even at higher cfg values (because my images at cfg 1 are usually not detailed enough). Or maybe this has nothing to do with ignoring the negative prompt and is just happening because of that specific cfg value?
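One plausible explanation for the 1.0-only speedup (a sketch of how samplers commonly implement guidance, not ComfyUI's actual code): classifier-free guidance combines a conditional and an unconditional (negative-prompt) prediction every step, and at cfg exactly 1 the formula collapses to the conditional prediction alone, so the negative pass can be skipped and each step needs one model call instead of two. Above or below 1 both passes are still required, which would be why an empty negative prompt at higher cfg doesn't speed anything up.

```python
def cfg_denoise(model, x, t, cond, uncond, cfg):
    """Simplified sketch of classifier-free guidance (hypothetical helper, not ComfyUI code)."""
    if cfg == 1.0:
        # uncond + cfg * (cond - uncond) reduces to cond when cfg == 1, so the
        # negative-prompt pass can be skipped entirely: one model call per step
        # instead of two, which matches the roughly 50-60% step time observed.
        return model(x, t, cond)
    pred_cond = model(x, t, cond)
    pred_uncond = model(x, t, uncond)  # the negative prompt costs a full extra pass
    return pred_uncond + cfg * (pred_cond - pred_uncond)
```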
Incredibly useful, thank you very much, really awesome!
You're very welcome!
Hi Scott, great video, thank you.
When doing "vid2vid" with comfy and animdiff/controlnet...
Do you pass the video frames straight into the ksampler, or do you push empty latents into it?
I'm getting sub par results with the former, and have not tried the latter yet.
I will look into it. I don't do much in the way of video at this time.
@sedetweiler Results have gotten better thanks to some help from the wonderful Coffee Vectors and Purz and others; SD1.5 vid2vid is working quite well with LCM and AnimateDiff now, results on my Twitter from last night. Defo still needs some tweaks.
SDXL vid2vid with LCM and AnimateDiff is still proving a little more elusive, but I am doing a lot of tests to find the right combo of weights and ControlNets. Results coming soon.
Thank you for your amazing walkthroughs, you've really helped get me going with comfy in the recent weeks.
Thank you for a great tutorial.
You are welcome!
My ksampler doesn't have the LCM sampler?
Make sure everything is updated. This sampler is less than 24 hours old.
@@sedetweiler Updated everything and it's there now, thank you! I was running around 16 steps with DDIM, but LCM at 4 steps gives even better quality than DDIM did at 16. Do you know if quality keeps increasing with more LCM steps, or does it max out at 4?
It really feels like you skipped 5 steps here, considering that it is not at all clear how to get lcm into the sampler_name or that you renamed the lora. Can you sticky a comment that explains a couple extra steps to get to your initial position?
Yeah, he wants a cash grab... doesn't care about the people watching his YouTube, or why would he do that. Here... Latent Consistency Models LoRAs vs Latent Consistency Models Weights: the first one is the LCM LoRA (SDXL LCM LoRA, SD1.5 LCM LoRA, SSD-1B LCM LoRA), the second one is the full LCM models (SDXL, SSD-1B, Dreamshaper7). This video uses the smaller one, the LoRA. Just rename the downloaded LoRA pytorch_lora_weights.safetensors from lcm-lora-sdxl to lcm_sdxl_lora_weights.safetensors to match his.
If you update ComfyUI with the manager, the lcm sampler will be there. When watching tutorials on new features make sure your software is updated...
@@gordonbrinkmann So I went through ComfyUI_Manager and selected Update ComfyUI. I still don't have the lcm sampler. It's a relatively new install altogether, maybe Friday. Any idea what I am doing wrong?
@@whatwherethere I updated it after watching the video and not seeing the lcm sampler, then I restarted ComfyUI - did you restart it? The changes will only be implemented after closing ComfyUI and starting it again. That was all I did, then there was the lcm sampler.
I didn't need to rename the LoRA files. It's optional.
Wait, I'm confused. Does anyone else have the LCM sampler? I don't know how to get it into my list, but it's not there.
Make sure you always update before attempting new workflows. It is not even a day old.
I tried it; it works fine for still image generation, but when working with AnimateDiff, why does the image quality drop significantly?
I have no sampler called 'lcm' - have I missed a step? :(
Yup, this is a day old, so if you are not up-to-date you will not have it.
@@sedetweiler I discovered it! Thanks so much for this one - this is SO fast - reworking tons of workflows now ;)
To install, put the file in the lora folder.
Very cool,, Thanks
You bet
Thank you! this is useful for quickly making video frames. I use comfyroll , works well with it (but it's not easy - maybe you could make a tutorial? - see what i did here ;) - great vid as usual.
Great suggestion!
Thanks for replying @@sedetweiler. Comfyroll, rgthree and Trung's 0246 + Anything Everywhere are my go-to nodes right now.
For some reason this LoRA unloads the checkpoint from memory after every generation, so you have to wait a whole minute for it to load back into memory before the KSampler even starts to do anything. I saw the processes in the background status window, so that's how I know. Using SDXL without the LoRA, there is no 60-second wait for the model to load before the KSampler starts making an image.
Cool video ! Thanks
Glad you liked it!
Can you use AnimateDiff with LCM?
Just tried it, works better than i thought!
Yup, should work just fine.
how to enable upscale?
Like many others, no LCM sampler. Tried many updates through the Manager, with a restart every time, but no luck...
And you are pulling the latest from all of the extensions? These releases are less than a day old, so everything needs to be updated to keep up. Sorry you are unable to find it, but it isn't hidden, something just isn't updating.
You can actually see it here in the comfy code, added 3 days ago. You need to be sure to do a "git pull" on comfy.
github.com/comfyanonymous/ComfyUI/commit/002aefa382585d171aef13c7bd21f64b8664fe28
Nuked Pinokio, reinstalled, and tadaaa!! Working great!
LOL how do you install this LCM?
Does it work with A1111?
Probably in a few weeks.
Amazing thank you
No problem 😊
😛It's...AWESOME !!! Especialy to make videos.
Yeah it is!
I don't see the graph under the member section🤔
Are you at sponsor level or higher? You should see it there with all of the other graphs.
Does this work with tensor RT?
Decline in quality or
Doesn't work for me. I get out-of-memory errors with 8GB RAM and a 4GB 1650 for SDXL. ComfyUI normally runs SDXL on this system, really really slowly, but at least it works.
I'm getting images with even worse quality than just using 5 steps normally gives... not sure what's wrong with mine.
Is this just for sdxl models?
You can use it with any model as long as you use the proper LoRA.
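In other words, match the LoRA family to the checkpoint family. A small diffusers sketch of the SD1.5 pairing, with the checkpoint repo id as an assumption (picked because Dreamshaper comes up elsewhere in this thread):

```python
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

# Same pattern as SDXL, just with an SD1.5 checkpoint and the matching SD1.5 LCM LoRA.
pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
image = pipe("a cozy cabin in the woods", num_inference_steps=4, guidance_scale=1.0).images[0]
```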
Sadly it's slower than a normal 30-step generation on a low-end machine (caused mainly by the LoRA itself, and ComfyUI sucks at loading the LoRA before the KSampler).
512x512 - 4 steps - 1 image = 115 seconds, ~7s/it (30 steps normally takes ~20 seconds, less than 1s/it)
512x768 - 5 steps - 1 image = 130 seconds, ~4s/it (30 steps normally takes ~35 seconds, less than 2s/it)
644x1000 - 5 steps - 1 image = 161 seconds, ~6s/it (30 steps normally takes ~50 seconds, less than 3s/it)
Damn! Just a month away and the world of AI has turned upside down.
Hi, thanks. Does anybody know if the LCM LoRA leads to lower quality images? It's so fast that I'm totally confused and can't stop wondering about the quality. Anyone?
Yes, it does take a bit of a hit, but it is just different, perhaps not lower quality.
Don't forget to install the WAS Node Suite and the LCM Sampler with the Manager before trying to build this workflow. Also, your ModelSamplingDiscrete seems mostly useless; I have 3 LoRAs chained, and adding your MSD node makes no remarkable difference or improvement whatsoever.
Yes, those nodes are critical, and I actually feel they should be part of the base product, they are so good.
👋
🍻
Not instructive at all, was very lost
Don't attempt this if you don't understand Hugging Face. Love how he assumes you know how to DL and install.
if downloading from HuggingFace is where you get stuck, you should prolly take a couple steps back before you delve into running all of this locally.