Normal SD images take a few seconds, but when I do SDXL images, they sometimes take like 15 freakin minutes? Other times, with the same parameters, SDXL only takes like 3-4 minutes? It randomly speeds up and slows down, lol, it's very annoying. I'm using an RTX 4070 8GB and 32 GB sys RAM. *Edit:* I fixed it. I added --xformers to my parameters in the SD loading file, then ticked xformers in optimizations... boom, reloaded everything, and what took 20 minutes is now down to 45 seconds to 2 minutes! Awesome.
I think you can, but I don't think you'll need it. Xformers uses so much less VRAM that I'm able to increase the batch size to 8 on 512x512, or generate a 2048x2048 image on my RTX 3080 10GB.
@@wolfai_ I don't think the images are necessarily worse. When comparing xformers with the original, the images have very slight differences, but it doesn't really make the image look worse - more like it looks slightly different.
@@KillFrenzy96 Alrighty. I've known about medvram for a while, but I never use it because someone said it reduces the overall quality. I forget where I read it, will maybe try it.
Thanks for YET another amazing tip. You are my go-to. Tried this on my 1070 w/ 8GB. First run averaged about 1.15 (LOL.) Unfortunately got an 'error completing request' so it doesn't work for me. Going to get that 3080ti....
🙃Gtx1070 is pretty old by now... 2016 was a long time ago; 8 GB of memory, a lack of Tensor cores... your bottlenecks sound like hardware/budget, ML tasks are intensive.
@@kayteetunna 6GB of GDDR6 on the card. When I bought the card back in 2020, I wasn't aware of AI art programs so I went with what I could afford and gave reasonable gaming performance. Any GPU suggestions?
The FREE Prompt styles I use here:
www.patreon.com/posts/sebs-hilis-79649068
You’re doing awesome work! I am not only able to learn all this stuff at lightning speed because of videos like yours but we are creating something beautiful as a community.
Keep it up 🎉
Thank you for the kind words. Truly appreciated! 😊🌟
@@sebastiankamph I really hate such people. He explains ordinary things and tries to sell them like top-secret, extraordinary knowledge. Of course most people know about the xformers option and how to activate it. An ordinary a$$h0le who tries to get more viewers for his channel.
i just subcribed because of your comment 🎉
--xformers FTW!
NOTE: Visions of Chaos has a checkbox to enable/disable xformers for Automatic1111 WebUI before starting it up...
It also has a word of caution:
"When xformers is enabled image generation can be faster, but can also lead to image generation no longer being deterministic.
That means if you generate an image with the exact same settings as before the new image may be slightly different."
I was checking out VoC but really need to delve deeper. Thanks for reminding me!
What is "Visions of Chaos" i dont know? I'm interested 🤔👀
@@TheAiConqueror I think that's a fractal generator, visualisations based on Chaos Theory and Machine Learning
@@TheAiConqueror He's being a twat and not speaking plain English so those of us who don't play Dark Souls feel left out.
MAC USERS:
No, the graphics cards in MacBook Pros are not user-replaceable or upgradable. They are soldered onto the motherboard, and there is no way to swap them out for different models like you can with some desktop PCs.
Furthermore, Apple does not support Nvidia graphics cards in its newer MacBook models due to driver support and compatibility issues between Apple and Nvidia. Apple primarily uses integrated Intel graphics or dedicated AMD graphics in their laptops.
If you require Nvidia graphics for specific applications, you might consider using an external GPU (eGPU) setup with a compatible Thunderbolt 3 enclosure and an Nvidia card, provided you have the necessary software drivers that support MacOS. However, keep in mind that Apple's support for Nvidia cards can be limited, and you should check for compatibility before making any purchases.
"When xformers is enabled image generation can be faster, but can also lead to image generation no longer being deterministic.
That means if you generate an image with the exact same settings as before the new image may be slightly different."
That's not accurate; it's actually deterministic within your environment. I'm also not sure this is even a past bug, because I haven't been able to reproduce it myself across multiple environments.
@@knoopx Maybe it's been resolved, but last I knew it was left as a consequence of the increased parallelism used by xformers, which contributes to the faster generation. When it was an issue, I definitely noticed it and was one of the first people to alert others in various Discord channels about something breaking deterministic results. At the time, I didn't realize it was xformers. It was made worse if one used the AND feature in automatic1111's repo. Xformers was patched a couple of times since then, but to my knowledge, which is possibly outdated, the breaking of perfect determinism was never fully resolved. The speed boost, however, is significant enough for me that I stopped caring about it and decided that if the art I produce isn't perfectly reproducible, it's actually a little more valuable.
@@Aeloi How would you rate the "image degradation" compared to the accuracy? Could it get in the way for a beginner? I'm still learning to write prompts and still get strange results even without accelerators.
@@---Nikita-- xformers shouldn't have any effect on quality, though at first it reportedly did. It just makes it such that with the same set of parameters and prompt, you can click generate multiple times and get slightly different results each time.
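For anyone wondering why extra parallelism can break bit-for-bit reproducibility at all: floating-point addition is not associative, so when a parallel reduction (like the attention ops xformers speeds up) sums values in a different order, it can land on a slightly different result. A minimal, self-contained Python sketch of the effect (the values are contrived for illustration):

```python
def reduce_in_order(values, order):
    """Sum values following a given index order, as a parallel
    reduction might after its threads are scheduled differently."""
    total = 0.0
    for i in order:
        total += values[i]
    return total

vals = [1e16, 1.0, -1e16]
print(reduce_in_order(vals, [0, 1, 2]))  # 0.0 -- the 1.0 is absorbed into 1e16
print(reduce_in_order(vals, [0, 2, 1]))  # 1.0 -- the big terms cancel first
```

The same seed still fixes the noise, but tiny differences like this get amplified over dozens of denoising steps, which matches the "slightly different image" behaviour described above.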
It worked the first time I ran it, but now I open it again and it says "No module 'xformers'. Proceeding without it..." I'm confused.
You will also get a noticeable speedup by increasing the batch size (to about 4; beyond that the performance benefit starts to flatten out) and by setting the "Show image creation progress every N sampling steps" option to -1. The former increases VRAM demands, and the latter decreases them.
Good tips! Yes, I usually use batch count to make the vram dump between each image.
thanks mate appreciate it
Super helpful thanks! Enjoying your videos (and dad jokes!) ✨
Thank you very much! You're a real superstar, you! 🤩
Thanks so much for taking the time to make the video! I personally prefer deterministic results, but this could speed up times for a lot of people.
What does deterministic mean in this context? Are you saying it's technically lower quality or it might negatively affect the results in some way?
@@LoneChipmunk It means it can reduce quality slightly, and it makes it harder to reproduce an image even with the same prompts used.
@@NineSeptims Does it matter if I don't care about reproducability? How much of a quality dip are we talking?
@@LoneChipmunk The quality dip is noticed at higher resolutions; it's around a 10% drop.
If you're running old python 3.9.x I'd recommend reinstalling A1111 after uninstalling 3.9.x and installing 3.10.x. I had some problems when trying to just update python since it kept the old version as well and the venv-version was different.
Good informative comment that I hope can help someone else out there 🌟
If anyone has this issue, just get the correct version of Python installed (ideally removing all others to be certain) with the "add to PATH" option checked on the first page of the installer, then delete your venv folder in SD and start your webui. It will rebuild your venv folder with the current and correct version of Python.
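If you're not sure which interpreter your rebuilt venv actually picked up, a tiny check like this, run with the venv's python, can confirm it before you spend time debugging. The (3, 10) requirement reflects what A1111 expected at the time of these comments:

```python
import sys

def python_matches(required=(3, 10), version=None):
    """True if the interpreter's major.minor matches what A1111 expects."""
    if version is None:
        version = sys.version_info
    return (version[0], version[1]) == tuple(required)

# If this prints False, delete the venv folder and relaunch webui-user.bat
# so it rebuilds against the correct interpreter.
print(python_matches())
```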
thanks for clear guides
RTX 3090 reference/founders, slight underclock of 100 (to lower overall temps and power draw), improved my testing run by 61%. Settings for the test were LMS @ 150 sample steps, 1024x1024, 38/75 prompt size used. Something else interesting to note: the first test without xformers used around 18.5GB VRAM, and the test with xformers on only used 10.2GB.
Those are some great results!
@@sebastiankamph yeah it's been great. Really helping save time with hypernetwork learning and outpainting.
What's your VRAM?
So I often get an error message on SD when I have undervolted my graphics card. It runs stably in games. It also says something about graphics card cuda manipulation.
@@kayteetunna 3090 has 24GB
I'm on a 3070 and this sped up my Deforum videos from 12 hours to 1 hour.
this also helped me fix my cuda memory errors i've been getting, thanks!
THANK YOU! I didn't know anything about this, I have series 3 RTX so it worked for me. Only reason I even found this video was because I kept seeing xformers missing when client started lol.
Glad I could help!
Don't do this with a 900 series card or below it will brick your installation, you'll need to build xformers manually !
Also don't use the live preview thing for images, it slows down generation speed quite a bit, very noticeable on older cards.
@@neeil634 Go to Settings, then on the left side, closer towards the bottom, you will see Live previews. When you click on it, you will see "Show live previews of the created images" on the left. Hope this helps!
I'm trying to install xformers on my Mac M1, but I haven't been able to. Is there a page or a tutorial for macOS users to install xformers?
Tell me the webui-user.bat settings for the RTX 3060 12GB video card /// thnx bro
Thanks for the straightforward tutorial. I've seen other channels showing how you need to install and edit a whole bunch of programs in order to enable xformers. Does that make a difference?🤔
I am getting "No module 'xformers'. Proceeding without it." after adding cmd argument?
I still don't understand how the generation speed works. I have about 87 it/s, but it takes about 2 hours to generate one image at 720x1024. Can you help me?
Would this work on an M1 Mac?
Great and easy info vid, thanks 👍
Happy to help!
I didn't know about this until I installed Deforum with 6GB vram. Everything runs way better, and I can create 768x1024 images on still images also. It's a really essential tip.
Great stuff! I was struggling, downloading libraries and getting files I didn't need, and this solved it in seconds. Definitely earned a like!
I found my progress on an MSI 2070 ARMOR went from 3.78 with no preview to 4.54 it/s with preview every 0 steps. With no steps, I reached 4.67 at peak.
I'm getting those three lines fairly early on:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
So what's wrong?
found a fix yet? :)
Just errors; I have to start it with --disable-nan-check now for it to work at all.
If I use both --xformers and --disable-nan-check, I get black pictures.
i5-10400F, 16GB RAM, GTX 1660S 6GB, GPU driver version 31.0.15.3130
It was fine for me. But now with SDXL 1.0 it takes me 10 minutes per image, even though I have an RTX 3050 8GB and 32GB RAM.
Thank you so much. was taking forever for 4 images
Glad it helped!
thanks you very much from my deepest heart
wow u weren't lying there's a major difference now
Why, when I put in prompts to create an image in Stable Diffusion, does it take time to create, but when it reaches 100% it just disappears and I don't even see it?
I have read that it's because of the VRAM - if it is full, you will not be able to see the pictures.
I was wondering what that little xformers warning at the start meant. Thanks for the tip!
Thank you, I saw a 50% improvement on my 3090
Could you do a video on hypernetworks? I'd like to be able to add images of myself to generate but don't want to do a full dreambooth model to have to merge with more checkpoints im using.
There's a Dreambooth extension for Automatic1111 WebUI that you can enable.
Here's an old video on hypernetworks: th-cam.com/video/1mEggRgRgfg/w-d-xo.html
Seb also has this video on setting up the Dreambooth extension:
th-cam.com/video/_GmGnMO8aGs/w-d-xo.html
How is this gonna work on an AMD Radeon RX 6650M (8GB)?
faster than flash!!! 🏃🏻💨
So fast! ⚡⚡⚡ Thank you so much for your support my friend! 😊😍
Already activated in the colab version of dreambooth automatic1111 and others.
can this be done with easy diffusion and if so where and how 🙏
Any chances this tips on Mac M1 .. ?
Thank you very very much!
I guess performance depends on many variables.
I tested RTX 4090 using (version: v1.7.0 • python: 3.10.10 • torch: 2.0.1+cu118) on Windows 11, Chrome browser.
SDXL model, 1216 x 832 image generation (Batch size = 4) and here are my results:
(no arguments)
[00:44
THANK YOU SEB!!! 😍
Happy to help!
Why did you drag the words into Notepad, even though those words are already there, just without the word xformers?
Super thnx Sebastian
You're very welcome! 🌟
So if this is so much better, why isn't it integrated into the generation process by default?
Awesome. Testing now with my 4090.
BEFORE:
Batch: 4, Batch Size: 1 = ~6.2it/s & 00:38 to complete & 9.5 secs per image
Batch: 6, Batch Size: 6 = ~3.1it/s & 01:59 to complete & 3.3 secs per image (This is my usual batch options for creating heaps to give myself options)
AFTER:
Batch: 4, Batch Size: 1 = 12.2it/s & 00:21 to complete & 5.2 secs per image
Batch: 6, Batch Size: 6 = ~4.7it/s & 01:25 to complete & 2.3 secs per image
Pretty huge difference. Over long periods, this will save me a lot of time. Really awesome. Thanks!
And for anyone interested, here's the test settings:
Steps: 50, Sampler: LMS, CFG scale: 7, Seed: 1908815469, Size: 512x512, Model hash: 44bf0551
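As a sanity check on numbers like these: each reported iteration is one sampling step over the whole batch, so a rough lower bound on run time (ignoring model load and VAE decode overhead) is just total steps divided by it/s. A quick back-of-the-envelope in Python, using the batch-of-6 figures above:

```python
def batch_stats(batches, batch_size, steps, it_per_s):
    """Rough timing for an A1111 run: total iterations = batches * steps,
    and each iteration advances every image in the batch by one step."""
    total_seconds = batches * steps / it_per_s
    per_image = total_seconds / (batches * batch_size)
    return total_seconds, per_image

total, per_image = batch_stats(batches=6, batch_size=6, steps=50, it_per_s=4.7)
print(f"{total:.0f}s total, {per_image:.1f}s per image")  # ~64s, ~1.8s/image
```

That undercuts the reported 01:25 a bit, which is expected since the bound ignores per-batch overhead, but it's close enough to confirm the it/s reading is consistent with the wall-clock time.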
Glad it helped you and thanks for posting results.
You should be getting way more with a 4090! Like, a lot more! Have you updated the cuDNN files?
I'm getting around 25-30 it/s with a 4080...
But what about Easy Diffusion?
I don't see web-ui for me.
Guys, if you notice that the value decreased, make sure it isn't showing s/it instead of it/s.
I was confused because I thought it had gotten worse, but in reality my system is so slow that it can't complete 1 iteration in under a second - that's why it shows s/it instead of it/s =)
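This tripped me up too, so here's the rule spelled out: the progress bar flips units when the rate drops below one iteration per second. A small Python sketch of the convention (the exact formatting here is an assumption; the unit flip is the point):

```python
def describe_rate(iterations, seconds):
    """Mimic the progress-bar convention: it/s when >= 1 iteration
    per second, otherwise the reciprocal as s/it."""
    rate = iterations / seconds
    if rate >= 1.0:
        return f"{rate:.2f}it/s"
    return f"{seconds / iterations:.2f}s/it"

print(describe_rate(25, 10))  # 2.50it/s -- fast system
print(describe_rate(10, 25))  # 2.50s/it -- same digits, 6.25x slower!
```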
Pls tell me there is a way to cure out of memory errors or to refresh it all without having to do a system restart?
I keep getting an error when I try to increase my batch count, "modules.devices.NansException: A tensor with all NaNs was produced in VAE. This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check."
Other than that I think it helped, thanks
Gained a solid .6 it/s on a 3060 12g. Thanks for the tip
I am getting an "entry point not found" error.
hi, I have an RTX 4070 OC, what settings should I enter?
Homie your videos are the best, I started an ig acct to share deforum vids!
Legend! Join us in Discord and share them in our art-share channel. See link in channel description. 🌟
Thank you!
Does this also work with fooocus instead of A1111? I can't find a file like that in the fooocus folders.
(with random settings on a 3080) From 11.5 it/s, after adding --xformers, it is now 13.8 it/s. It worked :) :) :)
Boom! 💥
How do you get more than 5 or 6 it/s...or even 13.8??? How is that even possible???😵😵😵
Weird, mine says no module 'xformers', proceeding without it.
Hello Sebastian Kamph, I like your videos. You help me so much to make better art. Thank you for that!!! :)
I have a question: I have a Ryzen 5 5600 with 6 cores, 32GB RAM and an Nvidia GeForce RTX 3060, and I used your xformers change. But I only get 1.5-2 it/s. Is there anything I can do to get more iterations per second???
Thank you in advance for your answer. :)
Keep up your great work! :)
There's a few things you can do to speed it up, but not a lot beyond xformers. You can install torch 2 (or try Vlad diffusion) and see if that runs faster for you. Also try faster samplers like euler_a and ddim. Keep the resolutions lower, and then only go higher once you've found a composition you like.
@@sebastiankamph Hello Sebastian! Thank you very much for your answer! :) I appreciate it a lot! :) I will try the things you mentioned and give you the results. :) Wish you a nice weekend! :)
@@loveatomb Hello Visquint, I looked at my Stable Diffusion installation and the stuff was already installed. So I used his tips with lower resolution and the sampler. That sped up my work process. :)
Nothing is changing when I add --xformers to the file. I don't get the "running with arguments: --xformers" either; it is just empty space after 'arguments:'. Am I missing something?
Wow! Speeding up renders will be awesome for me since my card isn’t great
@@__________HolySpirit__________ nvidia 1660
My 4070 suddenly got a random performance drop and only generated highres fix images at 10 seconds per iteration. Thanks to xformers it now generates the highres images at 5 it/s, but I am still wondering why it is so slow. Any further help is much appreciated.
Edit 1: I don't have any info on the speed before, but it generated the images probably twice as fast. Also using an i7 12th gen, but I didn't change that.
Edit 2: I believe it had gotten slower after a git pull.
Edit 3: Normal images generate at 1 it/s.
Confirmation that it didn't do anything with my Quadro M4000 (Maxwell). I also have a Tesla P4 (Pascal) installed, but I don't know how to tell SD about it. Blender sometimes recognizes it, but after running automatic1111 it usually disappears as an option.
Could you by chance make a quick step by step instruction to add a Dark Theme to Stable Diffusion GUI? 🙏
Doesn't work on my ROG Flow X13 (1060 series). Any suggestions?
Ok this is impressive af
Vroooooom!
My set COMMANDLINE_ARGS line already has =--skip-torch-cuda-test --precision full --no-half.
Where should I add the xformers?
Before, after, in between. Doesn't matter
@@sebastiankamph thanks Seb, i tried but gave me an error message, i gave up.
Do we need Cuda? Thank you
This gave me like a .3 gain as a GTX 1080 Ti user 👍 Can't believe I've been upscaling and rendering giant Deforum animations at this speed, having a great time, and then I see this video and realize HOURS OF MY LIFE are GONE FOREVER within a week's time. A used RTX from Amazon is a new muuuuuust for me 🐢
xformers gives an error on Python 3.10.6 and torch 1.12.1+cu113; it's tested on higher versions like 3.10.9 and 1.13.1+cu117. What should I do? I don't want to install those and ruin my setup without knowing whether it will work!
Nice!!! 1.02 -> 1.42it/s on GTX 1080: ~40% faster!!! Big Thx!
Thanks very much. I did edit my bat file and saw an improvement in performance; however, it interfered with the Blender plugin for Stable Diffusion, causing it to only run once, since the bat file line "set COMMANDLINE_ARGS=--api" already has it set to --api. I tried "set COMMANDLINE_ARGS=--api --xformers", which did load xformers but caused the issue with the Blender plugin.
Did you get a solution?
Ack, sadly Python 3.10 refuses to install, which is why I am stuck on 3.8. I've been waiting to use xformers since it was added.
Can we do hardware acceleration for this? TensorRT?
Hello Sebastian, I really appreciate your videos. I wanted to share my current bitter problem regarding graphics card upgrades. I finally purchased an RTX 3060 12GB to replace a GTX 1050 Ti 4GB. After running all the tests, the second thing I did was remove the --medvram entry, leaving the --xformers --opt-split-attention entries. Speed is definitely improved, but the results are dreadful. It doesn't follow the prompt, gives me only deformed images full of artifacts, and above all, with the same settings used in previous jobs it produces almost cardboard monsters. It almost seems as if the negatives are read as positives. I read on Reddit that this has already happened to someone, and that in desperation they put the old card back in. Can you give me some information on this? Has anyone here recently upgraded? Thank you very much. (I'm desperate)
Does it work with GTX 16-series cards?
Thank you. 1080Ti still seems to top out at 2.82 it/s.
At least you'll get lower vram usage 🌟
How to get dark background?
Hello, recently whenever I try to train a model in Stable Diffusion Dreambooth I get this error. "import of xformers halted; None in sys.modules." but I'm unsure how to fix was wondering if anyone could help?
Xformers has been installed and has worked as I've trained successfully in the past. I also have --xformers in the command line in the webui-user.bat file.
Any help would be appreciated.
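One quick way to check whether xformers is even visible to the Python environment the webui uses; run it with the webui's own interpreter (e.g. venv\Scripts\python.exe — that path is just an example, yours may differ):

```python
import importlib.util

# find_spec returns None when the package can't be imported at all,
# which would explain "import of xformers halted; None in sys.modules".
spec = importlib.util.find_spec("xformers")
if spec is None:
    print("xformers is not installed in this environment")
else:
    print("xformers found at", spec.origin)
```

If it reports not installed, the --xformers flag alone can't fix it; the package needs to be (re)installed into that same venv first.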
If A1111 is set to automatically update, should this already be installed? Why does it work so well? Is it lossless?
It's black magic. It just works.
THANKS!
Hii! I am running Stable Diffusion on my laptop and it is super laggy... my whole system just lags and freezes...
Laptop specs:
8GB RAM
GPU: NVIDIA GeForce GTX 1650 Ti
CPU: Intel Core i7 10th gen
Also, I was running it using Python 3.11.4. How do I downgrade without anything else getting affected?
Thanks to anyone who answers
How do I uninstall it? I forgot to make the comparison of the before and after
Just remove --xformers in the file and restart.
@@sebastiankamph ah, that was simpler than I thought, thanks :D
Stable Diffusion fails to start after xformers are installed (GTX 1660 Ti)
For me it's the same on a GTX 1080 with 8GB VRAM, but it does save on VRAM
On a personal bench I went from 2.20 s/it to 1.93 s/it on a GTX 1080
Sometimes we just have to take our computers apart and upgrade. I'm waiting for two 16GB RAM sticks to come in the mail after saving up some money.
Normal SD images take a few seconds, but when I do SDXL images, they sometimes take like 15 freakin' minutes. Other times with the same parameters SDXL only takes like 3-4 minutes? It randomly speeds up and slows down, lol, it's very annoying. I'm using an RTX 4070 8GB and 32GB system RAM. *Edit:* I fixed it. I added --xformers to my parameters in the SD loading file, then ticked xformers in optimizations... boom, reloaded everything, and what took 20 minutes is now down to 45 seconds to 2 minutes! Awesome.
Can you tell me if using the --medvram or --lowvram commands can decrease the quality of the images?
I don't think so, these commands are for people who don't have 8GB of VRAM or more (like me lol)
It only affects speed negatively, not quality: --medvram not by a lot, but --lowvram by a lot.
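If you do want to try them, the flags just sit next to each other on the same line in webui-user.bat; a sketch (keep your own existing flags):

```bat
rem --medvram for roughly 4-8 GB cards; swap to --lowvram only if you still run out
set COMMANDLINE_ARGS=--xformers --medvram
```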
Hoping this helps with my 11GB GPU, it was erroring out saying it ran out of VRAM
Will help a bit for sure
From 3:40 to 2:08 with a 4090. Thanks! 💁🏻♂👑
Why does my 3080 give me only 5.71 it/s with xformers?
Depends on the sampler you use
Can this be combined with --medvram?
I think you can, but I don't think you'll need it. Xformers uses so much less VRAM that I'm able to increase the batch size to 8 on 512x512, or generate a 2048x2048 image on my RTX 3080 10GB.
@@KillFrenzy96 Ah, is that so? Thanks. Is there any degradation in the finished result?
@@wolfai_ I don't think the images are necessarily worse. When comparing xformers with the original, the images have very slight differences, but it doesn't really make the image look worse - it just looks slightly different.
@@KillFrenzy96 Alrighty. I've known about medvram for a while but I never use it because someone said it reduces the overall quality. I forget where I read it; I'll maybe try it.
Does this actually reduce VRAM usage?
Yes
Thanks for YET another amazing tip. You are my go-to. Tried this on my 1070 w/ 8GB. First run averaged about 1.15 (LOL). Unfortunately I got an 'error completing request', so it doesn't work for me. Going to get that 3080ti....
Glad to have you around! 🥰 3080ti will serve you well
Got the 3080ti. Just over 7 it/s without xformers, just under 9 with. I'll take it! Thanks again :)
Nice trick for new people... though the xformers parameter is pretty old by now 😹👍
Btw, not really much of a gain on a GTX 1070
🙃 A GTX 1070 is pretty old by now... 2016 was a long time ago;
8 GB of memory, a lack of Tensor cores... your bottlenecks sound like hardware/budget. ML tasks are intensive.
Got an NVIDIA GeForce GTX 1660 Super, renders at 1.10 it/s. It's a tiny improvement for me (I don't recall what the numbers were beforehand).
It's something! 😅
What's your VRAM?
@@kayteetunna 6GB of GDDR6 on the card. When I bought the card back in 2020, I wasn't aware of AI art programs, so I went with what I could afford that gave reasonable gaming performance. Any GPU suggestions?
Going from 2.7 it/s to 4.7 it/s on my RTX 3070. Insane
It doesn't work with Python 3.11.4
Does this work with a 1060 3GB? My card's kinda old 😅
You need to run it with lowvram settings and maybe half the render resolution, but it might work.
Don't expect large or high-res pictures though 😅
It says "no module xformers"?