Thank you so much for this tutorial. This worked for me and I have an unsupported 6700 XT. First time I got inpainting and SDXL working properly. You do a good job explaining things, but the best part is how you put the links to everything in the description. Makes my life so much easier.
I also have a 6700 XT and am struggling to make it work. Would you be able to share any issues you had while setting this up and how you overcame them, please? :)
[WinError 126] The specified module could not be found. Error loading "C:\Users\___\ZLUDA\stable-diffusion-webui-directml\venv\lib\site-packages\torch\lib\cublas64_11.dll" or one of its dependencies. please help
Hi, your video was really helpful some months ago, but it seems that an update changed something and now there are some errors. Do you know what causes: "OSError: none is not a local folder and is not a valid model listed on 'huggingface models'. If this is a private repository, make sure to pass a token having permission to this repo, either by logging in or by passing 'token='" and "Failed to create a model quickly; will retry using slow method." Those are the errors I'm getting; I hope you know how I can solve them 🙏.
Thank you so much for this. The 7900 XTX is finally worth it. I had to disable integrated graphics in the BIOS to get this to work. Excellent instructional video. This shit is crazy.
Ty so much for putting in the time and effort to help random people figure these things out. Almost every single one of your recent videos helped me navigate this new world of technology, and I wouldn't even be able to try it without you. Ty infinitely.
Thank you for this. Using onnx and olive was kind of great, getting faster it/s but not being able to use loras and converting models made it not that enjoyable, but it was still good to learn and practice with.
I just got a bargain RX 6800, as I heard that you can do the AI stuff without mortgaging your house to Nvidia. The RX 6800 is only 20% slower than an RTX 3090, and a new one is half the price of a used 3090, so here I am trying to get it to work... thanks for your videos... good work! ;D
Did you get this to work with the latest SD DirectML? This tutorial as-is is too old and I am getting errors.
Thanks, I've finally found a functional tutorial; it's working with an RX 6650 XT. Greetings in Spanish; I understand English but my diction isn't good. Thanks.
The last time I followed your ComfyUI + Windows with DirectML guide, it worked like a charm for my RX 6600 for SD1.5. Wondered if this is any faster. Got myself a 4070S now tho 😁
I believe this should be a decent bit faster than just directml -- if I am remembering correctly, this might be about double the performance of directml alone.
I appreciate your content. It's so frustrating that it can't just work for AMD on Windows like it does for Nvidia cards. I'm hoping that'll change in the near future, but for now I use my 3070 for SD and my 7800 XT for gaming, and I'm good with that.
Man, this is mind boggling. I've been running SD for over a year now with a 6700XT, and I've been kicking myself for picking AMD over NVIDIA on my last upgrade. This is a game changer. Even getting something like ~3.00it/s is so much faster than I was getting before. And I'm getting hi-res fix running, which I could barely do before. This is awesome!
How did you get it working on a 6700 XT? The HIP SDK is not compatible with the 6700 XT (according to AMD), and I get an error 215 every time I try to install it. Are you using CPU only, or something else?
@@sei_asagiri I was able to get the SDK installation to complete. Then I replaced the library files with the alternate library files provided at the link at the bottom of the video description. If you're getting an error when installing the SDK, I'm not sure what the cause would be.
@@Sbill. I'm going to purchase an Nvidia GPU to replace my AMD GPU instead. AMD feels like it's designed exclusively for Linux people while Nvidia is designed exclusively for Windows people.
As a 7900 XTX owner I was getting so mad that I couldn't do any proper AI generation; bless you for your tutorials, man. You are amazing, the true hero of AMD. But you should consider telling people about compatibility with other models at the beginning of the video, to be honest.
Thanks for your tutorials, they are really well explained. For others like me who have an old config: I tried, even though I knew very well that my gpu wasn't on the list. If you get this message: "rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU" it's dead!
I knew my RX580 wasn't anywhere on the list, but it's 8GB VRAM, so I tried it anyway, and it works! Had to replace those library files (third option), put in a couple of ARGS in user.bat (--use-zluda and --no-half), but that got it working. Only issue is how long the image generation takes, which is like 10-15 minutes. I know it's running on the GPU instead of the CPU, because I can hear the GPU's fans working harder, but is there a good way to speed it up, without breaking it?
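If it helps anyone on a similarly unsupported card, this is roughly what that webui-user.bat ends up looking like. A sketch only, assuming the --use-zluda and --no-half flags mentioned above; your card may need different flags.

```bat
@echo off
rem Sketch of a webui-user.bat for an unsupported card (here an RX 580).
rem --use-zluda and --no-half are the flags from the comment above;
rem whether they are sufficient on other cards is an assumption.

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--use-zluda --no-half

call webui.bat
```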
@@udinmoklet Sounds okay! I am trying to install it for my 6700 XT as well, but the HIP SDK isn't working for my card; I'm now trying another version, 5.5.1.
RX 7900 XTX I followed step by step, but getting this error: “rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1036”
Finally one that worked. My iterations per second increased about 500%. Thank you!! I do have a question: does soft inpainting work with this implementation of SD? I have not had success running soft inpainting. Thanks
Hey, thanks for the ongoing amazing videos. Worked like a charm the first time, but after the 2nd try I get the skip-torch-cuda error; adding --skip-torch-cuda-test only results in an error several people in the comments are experiencing. EDIT: Found the solution; had to open cmd in the ZLUDA dir, then navigate to the folder with webui.bat and start it like in the video!
Hi, I have this error after launching webui.bat to install everything:
rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1036
rocBLAS error: Could not initialize Tensile host: regex_error(error_backref): The expression contained an invalid back reference.
Press any key to continue . . .
Any idea what to do? Thanks for your help
I can tell you something is wrong. See how slashes go from back slashes to forward slashes? And at one spot there is a backslash next to a forward slash? Look at your env variables and check to see if something is weird.
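(The error itself says it's about a missing TensileLibrary for that GPU arch; the snippet below only illustrates the separator cleanup the reply points at. A small illustration, using the path from the error message above:)

```python
import ntpath  # Windows path semantics; usable on any OS for illustration

# The mixed-separator path from the rocBLAS error message above.
raw = "C:\\Program Files\\AMD\\ROCm\\5.7\\bin\\/rocblas/library/TensileLibrary.dat"

# normpath collapses the stray "\/" and unifies everything to backslashes.
clean = ntpath.normpath(raw)
print(clean)  # C:\Program Files\AMD\ROCm\5.7\bin\rocblas\library\TensileLibrary.dat
```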
can you please provide exact version numbers for both zluda and stable-diffusion-webui-directml? Newer versions of both have been released and I'm getting errors when I try to run webui.bat at the end of the installation process. I assume this is because I'm using incompatible versions of different packages? Thank you!
Noice, works. The only thing different I did from this video is downloaded the latest version of zluda. It's slow though on RX 6700 XT 12GB. I guess my card isn't as good as I thought it was. At least it freaking works.
I did change to the latest version. Overall I did not honestly see any noticeable difference. But for some it might provide a more noticeable change? Or perhaps it supports more cuda functions?
@@FE-Engineer Yes, it's about the functions. Everything I would like appears to work, as previously it would just break. And there definitely is a performance increase.
Great tutorial! P.S. For my 7900 XTX, the perfect arguments for SDXL, with minimal crashes (one out of ~500 generations) at 1024x1024:
set COMMANDLINE_ARGS=--use-zluda --disable-nan-check --no-half-vae
set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True,max_split_size_mb:512
Hey, I'm on the step where you type in webui.bat. When I type it in it says 'webui.bat' is not recognized as an internal or external command, operable program or batch file.
In case this helps anyone else... I ended up having to redo my installation. The second time I installed, I copied in AlbedoBase XL and used it for the first inference. It worked immediately, without the 30 minutes of doing nothing that I got with the default model. Anyway, good luck out there everyone :)
Best tutorial, this worked for me. Too bad the RX 6800 doesn't have the "AI matrix" improvements RDNA3 has, so for that same test prompt I only got around 2.6 it/s... Also, is it just an impression, or is it more VRAM-hungry than running on Nvidia hardware?
Hi, thanks for the tutorial! I did everything as you said but I'm getting an error "launch.py: error: unrecognized arguments: --use-zluda". My GPU is RX 7800 XT
What version of PyTorch are you using? I saw 2.2.0 on the screen in passing, but is +cu also included? The reason I'm asking is that I'm getting SD to run fine, with the GPU recognized, but when I attempt to load a model I get an error:
20:14:51-079163 ERROR Diffusers failed loading: model=D:\stablediffusion\SDNext\automatic\models\Stable-diffusion\dreamshaper_8.safetensors pipeline=Autodetect/NoneType Building PyTorch extensions using ROCm and Windows is not supported.
20:14:51-083150 ERROR loading model=D:\stablediffusion\SDNext\automatic\models\Stable-diffusion\dreamshaper_8.safetensors pipeline=Autodetect/NoneType: OSError ┌───────────────────────────────────────── Traceback (most recent call last)
I'm currently using PyTorch 2.3.0+cu118. (I'm currently using the vladmandic fork, but this also occurs on my lshqqytiger fork as well.)
Thank you so much for this. I found it really helpful, especially considering that I am running on one of the RX 6000-series GPUs. In the end, the only thing I found on Stack Overflow to get past the issue was to delete the venv folder and then run the webui-user.bat file. But after a reboot, it seems to be outputting solid black or white images again, even after deleting that folder again. Can you think why this might be?
Getting Exception Code: 0xC0000005 with a 6700 XT on ROCm 6.1; any tips on what the issue is? I used the Python version you said to use, and also tried 3.10.11, and still see no change.
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check. Any clues? Do I need to install another version of pytorch? If so, how? Thanks in advance!
Update: I am getting 10-57 s/it using an RX 7800 XT for text-to-image with Stable Cascade + ZLUDA, with over a dozen browser tabs open. I created a 3840x2160 image I'm using as my wallpaper, with the highest around 36 s/it.
Thank you for the video. Took me a while to figure it out, but I finally managed to get a decent generation improvement on my setup: up to about 11 it/s in SD1.5 on a 7900 XTX. If others read this, try out the "--use-zluda" flag in stable-diffusion-webui-directml and SD.Next; they do the patching for you and install the correct torch version. Much easier this way.
@@matthewfuller9760 You divide the iteration count by the it/s. That gives ~2s for 20 iterations of SD1.5 at 512x512, or ~12s for SDXL base at 25 iterations at 1024x1024. More if you swap models, i.e. if you run an SDXL refiner, but AFAIK that mostly depends on your SSD speed.
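The back-of-envelope estimate above is just time = steps ÷ it/s. A tiny sketch, using the speeds quoted in this thread (they're examples, not guarantees):

```python
def gen_time_seconds(steps: int, it_per_s: float) -> float:
    """Seconds to run `steps` sampler iterations at a rate of `it_per_s`."""
    return steps / it_per_s

# SD 1.5 at 512x512: 20 steps at ~11 it/s (7900 XTX figure above) -> about 2 s
print(f"{gen_time_seconds(20, 11.0):.1f}s")
# SDXL base at 1024x1024: 25 steps at ~2.1 it/s -> about 12 s
print(f"{gen_time_seconds(25, 2.1):.1f}s")
```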
Thanks for the video! Got it up and running with barely any issues. Only question I have is about model training. Is that feature still only an Nvidia thing? When I try to create an embedding it seems to create one but the dropdown in the training tab doesn't seem to function. I also see errors referring to embeddings in command prompt.
That is possible. Zluda seems to correctly translate a decent amount of cuda functionality correctly and accurately. But even as a translation layer it is definitely not a 1:1 map of all cuda functionality. It is more than reasonable to assume as you get into more complex cuda functionality that the translation layer may not function properly or accurately. Both would result in failures or crashes. You might try dreambooth. I have not tested. It might work? Likely you will run into the same or even more issues though. Hard to say without trying.
Can u help??? What is the problem? I have an RX 6750 XT, installed the libraries, tried different ways, and the error does not go away. Either Stable Diffusion identifies the graphics card as the gfx90c architecture, or I get "RuntimeError: invalid argument to reset_peak_memory_stats".
Hey man, thanks for all the help you've delivered. One thing: are you planning to use LM Studio efficiently with ZLUDA to run LLMs locally? That would be great, as when I was using Linux it worked fine, but now I'm on Windows and it doesn't work.
I was not planning on it. But you are the second person to ask. I will spend some time on it over the weekend and see if I can get it running properly. No promises, it may be using cuda functions that are not supported through zluda.
As another semi-related side note: supposedly AMD has a build of LM Studio that is supposed to work with the HIP SDK, I think. I had no luck getting that to work though. :-/
I just want to say thanks; it seems to be working on my 7900 XTX. I'm just wondering, do you think we can use this in InvokeAI? I kind of like its layout and would love to use it on my AMD GPU. When you get the chance, let me know if you think it's possible.
I can pretty definitively say, for right now, on Windows... I doubt you will get it to run with ZLUDA. I spent multiple hours on it. cuDNN is heavily used in there, and while it may entirely be possible, I have not figured out a good way to disable it entirely and get it running. It is close; I just cannot entirely get cuDNN disabled, and it seems to be woven very deeply into this program overall.
Hi, great guide. When I run webui --use-zluda at the very last step, I get "return torch._C._cuda_memoryStats(device)" / "RuntimeError: invalid argument to memory_allocated". Any idea? It loads the UI, but of course any attempt to run anything fails. I'm on a 7900 XTX.
I never got into SD or Flux, so I haven't kept up with what Automatic1111 is, or what's needed if I want to make images, upscale the resolution, do SD video, and basically everything.
How fast is this compared to Olive approach? Zluda is not officially supported by AMD but they have partnered with Microsoft for Olive and other improvements.
Will take a look. I have been moving across the country and dealing with some family issues but I am looking for some new things to do so I will put it on my list.
Some questions as I'm watching your tutorial at the moment: Can you install the HIP drivers alongside the normal AMD drivers for Windows, or do you have to choose between one or the other? Regarding ZLUDA, in your video you downloaded version v3.1; however, the most recent version is v3.5. Is there any reason for that, or does it not matter which version you download?
It seems I'm running into an error when running webui.bat. I've successfully installed Python 3.10.6 (added to PATH), Git, HIP, added ZLUDA to PATH, etc. But when I run webui.bat I get this:
rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1031
rocBLAS error: Could not initialize Tensile host: regex_error(error_backref): The expression contained an invalid back reference.
Press any key to continue . . .
@@FE-Engineer Right, I didn't notice my card had one tick and one X. Thanks for that! Do you always have to wait so long for the first gen when starting the webui, or is it only the first time you do it? Also, is this compatible with something like Forge, or are the libraries not compatible? Regardless, thanks for yet another basically groundbreaking tutorial! It's really not nice having to dual-boot into an OS you barely have any experience with just for one use case.
Thank you for your effort for us AMD people 😁 Can I just add ZLUDA to my current A1111 installation? I've used DirectML for months; everything works, slow but it works. Is there a chance of breaking something?
I'm very surface level with my understanding, does this remove the option for ONNX? I'm happy with the speeds but am curious about if its possible to optimize these models further for AMD? I've installed the ReActor extension which seems to want to call for the onnxruntime-gpu. Everything functions as it should (including ReActor) but I'm curious about if there is a way to increase speed further?
ONNX has significant drawbacks. To my knowledge, ZLUDA likely will not apply to or work with the ONNX format. I don't believe ONNX is necessarily unavailable; it just likely will not use ZLUDA. The ONNX trade-off was that speed is a bit better, but inpainting will not work, SDXL will not work, and you may have to convert models, which can be a little time-consuming and has its own issues sometimes.
Should be this weekend. Might have two. One for a semi updated guide for this one. It’s not really different just shorter since it now helps you to get the files setup properly. Probably also one on sd.next. And I might do one on comfyui. But that is still weird and very manual I believe. :-/
Performance better, worse or equal to Linux ROCm?
about 20% - 25% worse than ROCm on linux I would say...but has all the normal features of automatic without any ONNX or Olive stuff that were very irritating.
@@FE-Engineer Ig its better than having to the all the optimization stuff over and over again, also likely a lot less space is consumed compared to duel booting linux
Yes. If you hate the idea of dual-booting Linux, or have other reasons why Linux ROCm is not an option, this is a reasonable workaround.
@@FE-Engineer I am only getting 1-2 it/s on a 6900 XT with ZLUDA. What is wrong?
@@FE-Engineer I used ventoy to run linux and i don't see where people say its easier to install and use, i had a hard time trying to get SD to even run.
I just bought a new card and once again I am back to your tutorials. Your videos helped me before, your tutorials are extremely up to date and easy to follow. Thanks man, you're doing a great job here!
Whahoo! Glad it worked and went smoothly! :). Thanks for watching!
I “sidegraded” from an RTX 3070 to an RX 6800. Mainly did it because I wanted that extra VRAM and I found a really good deal. Thank you for this tutorial! Very well put together
Me too, just found a bargain RX 6800. This is my best card ever, and apart from the bells and whistles this card punches well above its weight...
THANKS A LOT MATE! This is so awesome. I have played with DirectML and its settings like hell before. My webui-user.bat command argument lines were almost a full A4 page.
I noticed that you need to restart your PC to get new PATH entries to work on Win11. Without a restart you end up getting the "failed to load zluda path automatically" and "use skip-cuda-torch-test" info. Also, the first install will download cublas64_12 and cusparse64_12 instead of 64_11 without using the --use-zluda argument with user.bat. Idk why.
My 6750XT results:
1. SD 1.5 models: txt2img 1024x1024: 3.75 s/it average and 1:05 min generation time.
SDXL models: txt2img 1024x1024: 3.50 s/it average and 1:10 min.
NOTE: without ZLUDA this was an impossible task because of instant memory errors, and SDXL models took over 2 minutes even at 512x512 resolution.
2. Memory usage is now calibrated. With ZLUDA, SD uses only 10.2 GB of my 12 GB of memory and frees it up after generation. A 15-min 1024x1024 -> 2048 upscale did not hit a memory error. With DirectML you can't use more than a 1.5x upscale plus ControlNet; now you don't have to give that up with ZLUDA. This is awesome.
3. ControlNet works just fine
4. Ultimate Upscaler works normally
5. Inpaint works normally
The AMD Pro drivers are slightly faster than the Adrenalin version. There is a slight 5-15s delay with Adrenalin when pressing "generate" and no delay with the Pro drivers. Idk what causes this.
How did it work for you? I've got:
Cannot read C:\Program Files\AMD\ROCm\6.1\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1031
RX 6700 XT, with no working workaround for that shlt
@@SanyaWoFloride-k5u you need ROCm 5.7.1 and change files in \ROCM\5.7\
Very nice tutorial. I've moved over to the Nvidia side, but your tutorials were extremely helpful with setting up SD with Olive when I was still using my RX 7800 XT.
Thank you for making it easier for AMD users stuck in windows who are curious about trying SD.
Thank you :)
Very nice, except it doesn't work?
This goes back and forth. About a year ago price/performance was mostly on AMD's side, but due to continued improvements Nvidia likely has an edge now if you can get a good price on something like a 3080, or maybe even a 4070 Super.
With AMD: yes, Linux will give you better performance 99% of the time because of full ROCm.
OMG! I can't believe this worked! I'm running this on a 7800XT with no issues.
One thing to note though, this only worked with Python Version 3.10.6.
And also, for anyone not following FE-Engineer's file locations and structure: you can run CMD from the File Explorer address bar. Just navigate there and type "cmd" in the address bar, and a command prompt will open at that directory. Made things a bit easier for me.
You are the only person who has a workable SDXL AMD guide. Also, all the other stuff (torch, torch+cu, tensor) works well, and that's rare.
great stuff! Great tutorial as always, thank you.
Thank you so much for watching :)
This was a lifesaver! Fiddled for 2 days to get Olive/ONNX etc. working at an at-least-usable level, and after installing ZLUDA using this tutorial, (almost) everything works out of the box without constant tinkering.
Inpaint sketch does not work properly (renders the whole image instead of the mask area), but it is probably a -directml fork issue.
Overall, if users don't want to go to Linux for real ROCm, then until complete ROCm is on Windows, I think ZLUDA is an excellent compromise that still provides tons of functionality for folks on Windows. Thanks for watching!
Thank you so much for this tutorial! Haven't even finished the entire video yet but already started generating, even without replacing the files (which I did anyways, didn't seem to affect speed). Getting around 20-25 it/s which seems great! 7900XTX sure is a beast of a card!
Yea they changed some things to make it a lot easier.
Finally, after days of trying, I found your video... really good explanation, and I was finally able to make it run.
I’m glad it helped! :) thank you for watching!
I was really getting frustrated with all that shit.. Thank you so much for this video! Finally I can use SD properly again 🙏
I started with one of your other videos, but this one got me officially up and running (on a 6600XT). Much appreciated!
Glad it helped and worked without issue (hopefully). :) thank you for watching!
What generation rate in it/s do you get with ZLUDA? Is generation faster than with DirectML? Interested to know before installing; I have an RX 6600.
@@White-yz4kw Same doubt.
@@Torva01 Sounds like if you've got a ❌ on HIP SDK, it's about 3 times slower than Linux ROCm, at least according to one test with a 6700 XT.
Safe to say it'd be memory-efficient regardless, and I'm tempted to try it on a 6700, but I have to check if it's useful for something else, like DLSS maybe, because that speed gain is not worth it alone.
@@matthewfuller9760 I've tested it. It's about the same speed as Shark/Vulkan, but it didn't do much to help VRAM usage. Well, it consumes less than DirectML, but it falls apart when you try to upres on SD.Next.
Thanks a lot for the tutorial. I could not for the life of me get it to work on Fedora, and finally this works really well. I moved from an RTX 2060 to a new 7900 XT recently; I was getting 1.5-2x performance in ComfyUI, but with this I get at least 5-6x speed when generating with XL models.
Hi there, I'm looking at an RX 6800, so just to ask: are you quite satisfied with the performance and capabilities of your 7900 XT as opposed to the 2060? I have an RX 5700 which I am really happy with, though for the AI I need more VRAM...
@@CapaUno1322 Yes, definitely. With the 20 GB of VRAM I can run a 7B-parameter local AI in VRAM with LM Studio, and for ComfyUI it's night and day. But moving from a 5700 XT to a 6800 XT, I'm not sure the difference will be as big as the gap between a 2060 and a 7900 XT; that's a 2- or 3-generation gap for me.
Thaaanks a lot for your video! After I spent about 24h bricking everything, I finally stumbled across your channel! You helped me get my SD to run so much better than before! I'm looking forward to your next video with some more SD optimizations for Windows users :)
On that note: is there a PayPal or something where I can buy you a coffee? You saved me from insanity!
You are the best, sir. I have been struggling to get my 7900 XTX to work with anything. The only ones I got to work on Windows were Amuse, which is very lackluster and seems dead at this point, and SD.Next with a workaround, which is not great. With the workaround it lacks the ability to run bigger batches, upscaling, and inpainting, the pics look choppy, and a lot more. Not great, tbh. And Linux was just a mess: most won't open, and the few that work will only run on my CPU.
But with your help, I can finally generate pictures with all the features.
All hail the king!
OMG! Thank you so much for this one! I tried for so long to get this running... All the text tutorials were just too complicated.
You are welcome! I’m glad it helped. Thanks for watching!
thank you so much, this was actually not too bad to set up!
Yea, it is not exactly straightforward, but it is not that bad either. Thank you for watching and the kind words :)
It works with an RX 7600 XT, thanks for this amazing tutorial, the only one that really worked for me. Like and sub.
You are very welcome! Thank you for watching!
A tutorial like this has been a long time coming. Thank you so much for your service to the AMD community, which is so hated by the AI community.
You are welcome. I'm glad there is finally something on Windows with relatively decent performance that does not seem to be seriously lacking in anything.
Excuse my language... HOLY SHIT, this is good. I gave up on Windows and have been on Linux for a while, but now after testing this on Windows... oooh I love you. I can finally utilize my 7900 XT to its potential. Thank you for the easy tutorial!
I know right? It's sooooo good! While it isn't perfect, and I still want full ROCm on Windows, this is in my opinion finally a very reasonable not-quite-full-ROCm alternative!
Having to juggle between Windows for gaming and Linux for AI was frustrating, but this is just so fast, even more than when I was on Linux. Thanks for the work, and I'm sure I'm saying that on behalf of the whole AMD community :)
Can't even imagine how tough that was to work out. Thanks!
You're a blessing upon this world.
Okay, I finally got it working thank you so much, you’ve earned a sub
Glad it’s up and running! Thank you for watching! :)
@@FE-Engineer keep up the good work, so much faster now 😌
Got it running at last, all thanks to you!!
Thank you so much.
10 images at 1024x1536 (Hires fix from 512x768) 7900XT
With previous directml: 16min
Now with Zluda: 5min 30s
Whoah. That’s way better! Nice!
Works beautifully, thanks man!
Just got stable diffusion installed, failed the CUDA test, then you popped up. Thanks heaps!
Very nice work, thanks a lot!
You are welcome! Thanks for watching!
Thank you so much for this tutorial. This worked for me, and I have an unsupported 6700XT. First time I got inpainting and SDXL working properly. You do a good job explaining things, but the best part is how you put the links to everything in the description. Makes my life so much easier.
What's your performance like with the 6700XT? I'm curious.
My 6600 XT does 2 it/s, it sucks. Shouldn't have cheaped out on a card lol.
I also have a 6700xt and am struggling to make it work, would you be able to share any issues you had while setting this up and how you overcame them please? :)
@@Eminic112 between 1 and 2 iterations per second; it depends on the prompt. More tokens take longer.
@@Jay-js6zr I just followed the guide. Wasn't too hard. Make sure you are following it to the letter.
Thank you, works perfectly on my RX 6800, so fast. Amazing.
Fantastic! I’m glad to hear that. Thank you for watching :)
[WinError 126] The specified module could not be found. Error loading "C:\Users\___\ZLUDA\stable-diffusion-webui-directml\venv\lib\site-packages\torch\lib\cublas64_11.dll" or one of its dependencies. please help
Just FYI, you can use the Windows variable %userprofile% to provide an actual full path that can be used in Windows without exposing your username.
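For example, a quick sketch of how the variable expands (the ZLUDA folder location shown here is only an assumed example):

```shell
:: %userprofile% expands to C:\Users\<your account>, so a full path can be
:: shared or scripted without hard-coding a username.
:: The ZLUDA folder below is an assumed example location.
echo %userprofile%\ZLUDA\zluda.exe
```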
Make sure you copied the 2 files he mentioned and renamed them correctly.
@@silvermoonk9121 I worked it out, all good 😊
same error here, did anyone solve it?
Newer versions of ZLUDA have fewer files, or am I missing something?
You're the best. Hope your family is all good.
Thank you so much! Family is getting there. My son has a lot of medical issues. So long road there. But thank you for asking! :)
Hi your video was really helpful some months ago, but it seems that one update changed something and now there are some errors, do you know what causes:
"OSError: none is not a local folder and is not a valid model listed on 'huggingface models' if this is a private repository make sure to pass a token having permission to this repo either by logging or by passing 'token='
And
Failed to create a model quickly; will retry using slow method.
Those are the errors I'm getting, I hope you know how can I solve them 🙏.
Thank you so much for this. The 7900XTX is finally worth it. I had to disable integrated graphics in the BIOS to get this to work. Excellent instructional video. This shit is crazy.
Ah yes. You could likely set it in Windows environment variables (I think it is HIP_VISIBLE_DEVICES) and set it to 1, but disabling it in the BIOS works as well.
Ty so much for putting time and effort into helping random people figure these things out. Almost every single one of your recent videos helped me navigate this new world of technology, and I wouldn't even be able to try it without you. Ty infinitely.
You are very welcome! I am glad they helped! Thank you for watching!
Thank you for this. Using ONNX and Olive was kind of great for getting faster it/s, but not being able to use LoRAs and having to convert models made it not that enjoyable. Still, it was good to learn and practice with.
I just got a bargain RX 6800, as I heard that you can do the AI stuff without mortgaging your house to Nvidia. An RX 6800 is only 20% slower than an RTX 3090, and a new one is half the price of a used 3090, so here I am trying to get it to work... thanks for your videos... good work! ;D
Did you get this to work with the latest SD DirectML? This tutorial, as is, is too old and I am getting errors.
Thanks, I've finally found a tutorial that works; it's running with an RX 6650 XT. Greetings in Spanish; I understand English but don't have good pronunciation. Thanks!
Damn you were not lying about that first run. I ended up at nearly 30 minutes
The last time I followed your ComfyUI + Windows with DirectML guide, it worked like a charm for my RX 6600 with SD1.5. Wondered if this is any faster. Got myself a 4070S now though 😁
I believe this should be a decent bit faster than just directml -- if I am remembering correctly, this might be about double the performance of directml alone.
How do I uninstall torch-2.2.0+cu121 and install torch-2.2.0+cu112? I think that is my problem, because when I run SD it runs on the CPU.
this video is a blessing thank you!!
You are welcome :) thanks for watching!
Excellent work. Thanks!!!
There were a lot of messy errors, but I finally succeeded.
Thank you!!
Yea. It’s kind of crazy how many things say error and don’t matter. But it only takes one to wreck everything.
you are the best, man. still works.
I appreciate your content. It's so frustrating that it can't just work for AMD on Windows like it does for Nvidia cards. I'm hoping that'll change in the near future, but for now I use my 3070 for SD and my 7800XT for gaming, and I'm good with that.
Do you know how many it/s I should get with an RX 7800 XT if I do it all right?
Man, this is mind boggling. I've been running SD for over a year now with a 6700XT, and I've been kicking myself for picking AMD over NVIDIA on my last upgrade. This is a game changer. Even getting something like ~3.00it/s is so much faster than I was getting before. And I'm getting hi-res fix running, which I could barely do before. This is awesome!
How did you get it working on a 6700XT? The HIP SDK is not compatible with the 6700XT (according to AMD), and I get an error 215 every time I try to install it. Are you using CPU only, or something else?
@@sei_asagiri I was able to get the SDK installation to complete. Then I replaced the library files with the alternate library files provided at the link at the bottom of the video description. If you're getting an error when installing the SDK, I'm not sure what the cause would be.
@@Sbill. I'm going to purchase an Nvidia GPU to replace my AMD GPU instead. AMD feels like it's exclusively designed for Linux people, while Nvidia is exclusively designed for Windows people.
As a 7900 XTX owner I was getting so mad that I couldn't do any proper AI generation. Bless you for your tutorials, man. You are amazing, the true hero of AMD.
But you should consider telling people about compatibility with other models at the beginning of the video, to be honest.
That’s fair. I will try to include something at the beginning about this.
you are a legend
Thanks for your tutorials, they are really well explained.
For others like me who have an old config:
I tried, even though I knew very well that my gpu wasn't on the list. If you get this message: "rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU" it's dead!
What exactly do you mean by "it's dead"? I also get this error even though I have an RX 7900 XTX, which is most definitely completely supported.
I knew my RX580 wasn't anywhere on the list, but it's 8GB VRAM, so I tried it anyway, and it works! Had to replace those library files (third option), put in a couple of ARGS in user.bat (--use-zluda and --no-half), but that got it working. Only issue is how long the image generation takes, which is like 10-15 minutes. I know it's running on the GPU instead of the CPU, because I can hear the GPU's fans working harder, but is there a good way to speed it up, without breaking it?
Thank you very much, it generates pictures on AMD 6800 with around 5it/s
How is ROCm compared to the SD AMD fork that's been around? Sorry if my question is incompetent.
Thank you so much bro, it's working on RX 6700 XT!
took 23 mins+ on first generation
You are very welcome! Thanks for watching :)
Very nice! Can it generate fast now?
@@joris2032 well kinda fast, under 15 seconds maybe? depends on the resolution
@@udinmoklet Sounds okay! I am trying to install it for my 6700XT as well, but the HIP SDK isn't working for my card; I'm now trying another version, 5.5.1.
@@joris2032 there are extra steps that you have to do; read the documentation.
works perfectly on 6800xt thx
I've always used DirectML, so seeing this was really great. Many thanks to the author; I actually got it configured successfully!!!
This tutorial is awesome. Finally, the 7900XTX is usable in A1111, hallelujah!
Holy shit, it actually worked. Big thanks man!
😂😂 you are welcome. I’m glad it worked :). Thank you for watching!
It worked! Thanks!
RX 7900 XTX
I followed step by step, but getting this error:
“rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1036”
…it's seeing your integrated GPU…
Either disable it, or set HIP_VISIBLE_DEVICES=1.
@@FE-Engineer Hah, I guessed it before seeing the actual answer (gfx1036 is not 7000 series), and it works now. But thank you anyway :)
@@FE-Engineer Where do I put that?
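For anyone wondering where to set it: a sketch of both options, assuming the integrated GPU enumerates as device 0 and the discrete card as device 1 (HIP_VISIBLE_DEVICES is the variable ROCm's HIP runtime reads):

```shell
:: Hide the integrated GPU from HIP/ROCm for the current terminal session only:
set HIP_VISIBLE_DEVICES=1

:: Or set it persistently for your user account
:: (only takes effect in newly opened terminals):
setx HIP_VISIBLE_DEVICES 1
```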
Any ideas how to fix the "Failed to create model quickly; will retry using slow method" ?
Finally one that worked. My iterations per second increased about 500%. Thank you!! I do have a question: does soft inpainting work with this implementation of SD? I have not had success running soft inpainting. Thanks.
Hey, thanks for the ongoing amazing videos. It worked like a charm the first time, but after the 2nd try I get the skip torch cuda error; adding --skip-torch-cuda-test only results in an error several people in the comments are experiencing.
EDIT: Found the solution, had to open cmd in the zluda dir then navigate to the folder with the webui.bat and start it like in the video!
For me it was that I didn't start zluda.exe, or didn't run it as admin; I don't know which one solved it.
Thank you so much for this video. I was pulling out my hair trying to get this to work. Went through so many guides, but yours worked!
hi, I have this error after launching webui.bat to install everything:
rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1036
rocBLAS error: Could not initialize Tensile host:
regex_error(error_backref): The expression contained an invalid back reference.
Press any key to continue . . .
Any idea what to do? Thanks for your help
same error
Hey, I just fixed it. Disable your integrated GPU in Device Manager and wait a while as it loads and eventually downloads.
@@banned-user thank you, I will try it later. I'm not too sure though how to disable the integrated graphics
You can do it from the BIOS, for one.
But you can also set an environment variable for ROCm to use, and tell it to ignore the iGPU.
I can tell you something is wrong. See how slashes go from back slashes to forward slashes? And at one spot there is a backslash next to a forward slash? Look at your env variables and check to see if something is weird.
can you please provide exact version numbers for both zluda and stable-diffusion-webui-directml? Newer versions of both have been released and I'm getting errors when I try to run webui.bat at the end of the installation process. I assume this is because I'm using incompatible versions of different packages? Thank you!
Noice, works. The only thing I did differently from this video is I downloaded the latest version of ZLUDA. It's slow though on an RX 6700 XT 12GB. I guess my card isn't as good as I thought it was. At least it freaking works.
I did change to the latest version. Overall, I honestly did not see any noticeable difference. But for some it might provide a more noticeable change? Or perhaps it supports more CUDA functions?
@@FE-Engineer Yes, it's about the functions. Everything I would like appears to work, as previously it would just break. And there definitely is a performance increase.
Great tutorial!
P.S. For my 7900XTX the perfect arguments for SDXL, with minimal crashes (one out of ~500 generations) at 1024x1024, are:
set COMMANDLINE_ARGS=--use-zluda --disable-nan-check --no-half-vae
set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True,max_split_size_mb:512
👍👏
Hey, I'm on the step where you type in webui.bat. When I type it in it says 'webui.bat' is not recognized as an internal or external command,
operable program or batch file.
In case this helps anyone else... I ended up having to redo my installation. The second time I installed, I copied in AlbedoBase XL and used it for the first inference. It worked immediately, without the 30 minutes of doing nothing that I got with the default model. Anyway, good luck out there everyone :)
Thank you, easy and good guide
Time to install SD again and try it with my 5700XT :D
Aaaand no, you need an RX 6XXX. Linux can override the gfx version, but on Windows I didn't find a way to emulate Navi 2 ((
any luck? I got a 5700xt too
Best tutorial, this worked for me. Too bad the RX 6800 doesn't have the "AI matrix" improvements RDNA3 has, so for
that same test prompt I only got around 2.6 it/s...
Also... is it just an impression, or is it more VRAM-hungry than running on Nvidia hardware?
Hi, thanks for the tutorial! I did everything as you said but I'm getting an error "launch.py: error: unrecognized arguments: --use-zluda". My GPU is RX 7800 XT
Did you add the correct path?
@@kobusdowney5291 Yes. BTW I installed SD.Next and ZLUDA works fine, but in A1111 it doesn't for some reason.
What version of PyTorch are you using? I saw 2.2.0 on the screen in passing, but is +cu also included? The reason I'm asking is that I'm getting SD to run fine, with gpu recognized, but when I attempt to load a model I get an error:
20:14:51-079163 ERROR Diffusers failed loading:
model=D:\stablediffusion\SDNext\automatic\models\Stable-diffusion\dreamshaper_8.safetensors
pipeline=Autodetect/NoneType Building PyTorch extensions using ROCm and Windows is not
supported.
20:14:51-083150 ERROR loading
model=D:\stablediffusion\SDNext\automatic\models\Stable-diffusion\dreamshaper_8.safetensors
pipeline=Autodetect/NoneType: OSError
┌───────────────────────────────────────── Traceback (most recent call last)
I'm currently using PyTorch 2.3.0+cu118 (I'm currently using the vladmandic fork, but this also occurs on my lshqqytiger fork as well.)
Thank you very much for the detailed tutorial❤, but I have a little problem which is that the Karras type samplers do not appear. Any solution? 😢
Thank you so much for this. I found it really helpful, especially considering that I am running on one of the RX 6XXX GPUs. In the end, the only thing I found on Stack Overflow to get over the issue was to delete the venv folder and then run the webui-user.bat file. But after a reboot, it seems to be outputting black or white solid images again, even after deleting that folder again. Do you have any idea why this might be?
very cool, thank you
Glad you liked it, thank you for watching! :)
Getting Exception Code: 0xC0000005 with a 6700XT on ROCm 6.1, any tips on what the issue is? Used the Python version you said to use, and also tried 3.10.11, and still no change.
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check.
Any clues? Do I need to install another version of pytorch? If so, how? Thanks in advance!
Did you run "./webui.bat --use-zluda" in the terminal?
Thanks for the video. How do you start over if you mess up the steps? Is there a way to uninstall everything and start over?
update* I am getting 10-57s/it using rx7800xt text to image
using Stable Cascade + Zluda with over a dozen browser tabs open i created a 3840 x 2160 image, I'm using as my wallpaper with the highest around 36s/it
It is not using your GPU.
Did you add the --use-zluda flag?
Will take a look into it.
When installing by running the webui-user.bat file, it says error 1/2: no space left on device. I have 437GB of free space.
Thank you for the video. It took me a while to figure it out, but I finally managed to get a decent generation improvement on my setup, to about 11 it/s in SD1.5 on a 7900XTX. If others read this: try out the "--use-zluda" flag; stable-diffusion-webui-directml and SD.next do the patching for you and install the correct torch version. Much easier this way.
@@matthewfuller9760 you divide the iteration count by the it/s. That gives 2s for 20 iterations of SD1.5 at 512x512, or 12s for SDXL base at 25 iterations at 1024x1024. More if you swap models, i.e. if you run an SDXL refiner, but AFAIK that mostly depends on your SSD speed.
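The arithmetic above can be sketched as a tiny helper (numbers are taken from this comment; treat them as rough averages, not benchmarks):

```python
def gen_seconds(steps: int, it_per_s: float) -> float:
    """Estimated generation time: total sampling steps divided by iteration speed."""
    return steps / it_per_s

# ~10 it/s on SD1.5 at 512x512 -> 20 steps take about 2 seconds
print(gen_seconds(20, 10.0))  # 2.0
```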
How do I install the correct torch version and get it into the right folder? Complete newbie here and having issues.
Thanks for the video! Got it up and running with barely any issues. Only question I have is about model training. Is that feature still only an Nvidia thing? When I try to create an embedding it seems to create one but the dropdown in the training tab doesn't seem to function. I also see errors referring to embeddings in command prompt.
That is possible. Zluda seems to correctly translate a decent amount of cuda functionality correctly and accurately. But even as a translation layer it is definitely not a 1:1 map of all cuda functionality. It is more than reasonable to assume as you get into more complex cuda functionality that the translation layer may not function properly or accurately. Both would result in failures or crashes.
You might try dreambooth. I have not tested. It might work? Likely you will run into the same or even more issues though. Hard to say without trying.
is there any way to do LORA training with this set up at the moment? Or is it only available for hypernetworks built in to Automatic1111?
Can you help???
What is the problem? I have an RX 6750 XT, installed the libraries, tried different ways, and the error does not go away. Stable Diffusion identifies the graphics card as the gfx90c architecture:
"RuntimeError: invalid argument to reset_peak_memory_stats"
getting same error - did you ever find a solution?
Hey man, thanks for all the help you've delivered. One thing: are you planning to look at running LM Studio efficiently using ZLUDA to run LLMs locally?
That would be great, as when I was using Linux it worked fine, but now I'm on Windows and it doesn't work.
I was not planning on it. But you are the second person to ask. I will spend some time on it over the weekend and see if I can get it running properly. No promises, it may be using cuda functions that are not supported through zluda.
As another semi related side note. Supposedly amd has a build that is supposed to work with hip sdk I think for lm studio. I had no luck with getting that to work though. :-/
I just want to say thanks, it seems to be working on my 7900 XTX. I'm just wondering, do you think we can use this in InvokeAI? I kind of like the layout of it and would love to use it on my AMD GPU. When you get the chance, let me know if you think it's possible.
I can pretty definitively say that for right now... on Windows... I doubt you will get it to run with ZLUDA.
I spent multiple hours on it. cuDNN is heavily used in there, and while it may be entirely possible, I have not figured out a good way to disable it entirely and get it running. It is close; I just cannot get cuDNN fully disabled, and it seems to be deeply woven into this program overall.
@@FE-Engineer I guess we'll either have to wait for ZLUDA support or full ROCm support on Windows, correct?
That or if the devs decide to allow it and make a flag that disables cudnn.
Hi, great guide. When I run webui --use-zluda at the very last step, I get "return torch._C._cuda_memoryStats(device)" - "RuntimeError: invalid argument to memory_allocated". Any idea? It loads the UI, but of course any attempt to run anything fails. I'm on a 7900XTX.
where are you when I have amd -.- good job
I never got into SD or Flux, so I don't keep up with what Automatic1111 is or what's needed if I want to make images, upscale the resolution, do SD video, and basically everything.
How fast is this compared to Olive approach?
Zluda is not officially supported by AMD but they have partnered with Microsoft for Olive and other improvements.
i went from 14 seconds per iteration to 3 iterations per second on my 7800xt lol thanks
Hi, thanks for the tutorial video! Can you make a tutorial for installing reForge + ReActor, or Flux, using ZLUDA?
Will take a look. I have been moving across the country and dealing with some family issues but I am looking for some new things to do so I will put it on my list.
Some questions as i'm seeing your tutorial at the moment:
Can you install the HIP drivers alongside the normal AMD drivers for windows, or do you have to choose between one or the other?
Regarding ZLUDA, in your video you downloaded version v3.1, however the most recent version is v3.5. Is there any reason for that or does it not matter which version you download?
It seems i'm running into an error when running webui.bat. I've successfully installed python 3.10.6 (added to path), git, HIP, added ZLUDA to path, etc. But when i run webui.bat i get this:
rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1031
rocBLAS error: Could not initialize Tensile host:
regex_error(error_backref): The expression contained an invalid back reference.
Press any key to continue . . .
@Eminic112 at the end of the video and in the video description it talks about replacing rocblas files for some GPU’s…
@@FE-Engineer right, i didn't notice my card had one tick and one X. Thanks for that!
Do you always have to wait so long for the first gen when starting the webui or is it only the first time you do it?
Also, is this compatible with something like forge? Or are the libraries not compatible?
Regardless, thanks for yet another basically groundbreaking tutorial! It's really not that nice having to dualboot into an OS you barely have any experience with just for one usecase.
@Eminic112 just the very first one.
@@Eminic112 run the console as administrator
Thank you for your effort for us Amd people 😁
Can I just add ZLUDA to my current A1111 installation? I have used DirectML for months; everything works, slowly, but it works.
Is there a chance to break something?
I think if you have a version that is up to date, then yes, just add the --use-zluda flag.
Perhaps use --reinstall as well
I'm very surface level with my understanding, does this remove the option for ONNX? I'm happy with the speeds but am curious about if its possible to optimize these models further for AMD? I've installed the ReActor extension which seems to want to call for the onnxruntime-gpu. Everything functions as it should (including ReActor) but I'm curious about if there is a way to increase speed further?
Onnx has significant drawbacks.
ZLUDA, to my knowledge, likely will not apply to or work with the ONNX format. I don't believe ONNX is necessarily unavailable; it likely will not use ZLUDA though.
The ONNX tradeoffs were: speed is a bit better, but inpainting will not work, SDXL will not work, and you may have to convert models, which can be a little time-consuming and has its own issues sometimes.
BTW, when is your SD.Next with ZLUDA video dropping? Just curious and waiting for it, as I use SD for my social media.
Should be this weekend. Might have two: one a semi-updated guide for this one (it's not really different, just shorter, since it now helps you get the files set up properly), and probably one on SD.Next.
And I might do one on comfyui. But that is still weird and very manual I believe. :-/
@@FE-Engineer yes personally I hate comfy UI, it's complex to work on for me.
It'd be really cool to do a benchmark between an AMD card using ZLUDA (or ROCm) vs an Nvidia card using CUDA.