March 2024 - Stable Diffusion with AMD on Windows -- use ZLUDA ;)
- Published Mar 3, 2024
- SD is so much better now using Zluda!
Here is how to run AUTOMATIC1111 with ZLUDA on Windows and get all the features you were missing before!
** Only GPUs that are fully or partially supported by ROCm can run this; check whether yours is fully or partially supported before starting! **
Check whether your GPU is fully supported on Windows here:
rocm.docs.amd.com/projects/in...
Links to files and things:
Git for windows: gitforwindows.org/
Python: www.python.org/downloads/
Zluda: github.com/lshqqytiger/ZLUDA/...
AMD HIP SDK: rocm.docs.amd.com/projects/in...
Add PATH entries for the HIP SDK and for wherever you copied the ZLUDA files:
%HIP_PATH%bin
C:\path\to\zluda\folder
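Those two PATH additions might look like this from an elevated Command Prompt (the ZLUDA folder path is the placeholder from above; the HIP SDK installer normally sets %HIP_PATH% for you). Note that setx truncates values longer than 1024 characters, so the System Properties > Environment Variables dialog is the safer route:

```shell
:: Append the HIP SDK bin folder and the ZLUDA folder to the user PATH.
:: %HIP_PATH% typically resolves to C:\Program Files\AMD\ROCm\5.7\
setx PATH "%PATH%;%HIP_PATH%bin;C:\path\to\zluda\folder"
```

Open a new terminal (or reboot, see the comments below) so the updated PATH is picked up.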
Start the AUTOMATIC1111 web UI:
webui.bat
Copy the ZLUDA cublas and cusparse DLLs to
...\stable-diffusion-webui-directml\venv\Lib\site-packages\torch\lib
Delete cublas64_11.dll and cusparse64_11.dll
Rename the ZLUDA files:
cublas.dll to cublas64_11.dll
cusparse.dll to cusparse64_11.dll
Back in the terminal, run the web UI:
webui.bat --use-zluda
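The DLL swap above can be sketched as a few commands run from the stable-diffusion-webui-directml folder (the ZLUDA path is the placeholder from earlier; copying under the new names combines the copy and rename steps):

```shell
:: Replace torch's cuBLAS/cuSPARSE DLLs with the ZLUDA versions.
set TORCHLIB=venv\Lib\site-packages\torch\lib
del "%TORCHLIB%\cublas64_11.dll" "%TORCHLIB%\cusparse64_11.dll"
copy "C:\path\to\zluda\folder\cublas.dll" "%TORCHLIB%\cublas64_11.dll"
copy "C:\path\to\zluda\folder\cusparse.dll" "%TORCHLIB%\cusparse64_11.dll"
```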
If you have issues with cuDNN, edit
...\stable-diffusion-webui-directml\modules\shared_init.py
and add this after def initialize:
torch.backends.cudnn.enabled = False
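A minimal sketch of what the patched function might look like, assuming the stock stable-diffusion-webui-directml layout; only the one added line matters, the rest of the function body stays unchanged:

```python
# modules/shared_init.py (sketch)
import torch

def initialize():
    # With cuDNN disabled, ZLUDA skips cuDNN calls it cannot translate
    # and falls back to plain CUDA kernels instead of crashing.
    torch.backends.cudnn.enabled = False
    ...  # original initialize() body continues here
```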
If you have a GPU that is not fully supported by the HIP SDK, follow these instructions:
github.com/vladmandic/automat...
Performance better, worse or equal to Linux ROCm?
About 20-25% worse than ROCm on Linux, I would say... but it has all the normal features of AUTOMATIC1111 without any of the ONNX or Olive stuff that was very irritating.
@@FE-Engineer I guess it's better than having to do all the optimization stuff over and over again; also a lot less space is likely consumed compared to dual booting Linux.
Yes. If you hate the idea of dual booting Linux, or have other reasons why Linux ROCm is not an option, this is a reasonable workaround.
@@FE-Engineer I am only getting 1-2 it/s on a 6900 XT with ZLUDA. What is wrong?
@@FE-Engineer I used Ventoy to run Linux and I don't see why people say it's easier to install and use; I had a hard time trying to get SD to even run.
Thanks a lot for the tutorial. I could not for the life of me get it to work on Fedora, and finally this works really well. I moved from an RTX 2060 to a new 7900 XT recently and was only getting 1.5x-2x performance in ComfyUI, but with this I get at least 5x-6x speed when generating with XL models.
I just bought a new card and once again I am back to your tutorials. Your videos helped me before, your tutorials are extremely up to date and easy to follow. Thanks man, you're doing a great job here!
Whahoo! Glad it worked and went smoothly! :). Thanks for watching!
Can't even imagine how tough it was to work that out. Thanks!
OMG! Thank you so much for this one! I tried for so long to get this running... All the text tutorials were just too complicated.
You are welcome! I’m glad it helped. Thanks for watching!
You're a blessing upon this world.
Thank you so much for this video. I was pulling out my hair trying to get this to work. Went through so many guides, but yours worked!
THANKS A LOT MATE! This is so awesome. I have played with DirectML and its settings like hell before. My webui-user.bat command-line argument list was almost a full A4 page.
I noticed that you need to restart your PC for new PATH entries to take effect on Win11. Without a restart you end up getting "failed to load zluda path automatically" and "use skip-cuda-torch-test" messages. Also, the first install will download cublas64_12 and cusparse64_12 instead of the 64_11 versions if you run user.bat without the --use-zluda argument. Idk why.
My 6750XT results:
1. SD 1.5 models: txt2img 1024x1024: 3.75 s/it average and 1:05 min generation time.
SDXL models: txt2img 1024x1024: 3.50 s/it average and 1:10 min.
NOTE: without ZLUDA this was an impossible task because of instant memory errors, and SDXL models took over 2 minutes even at 512x512 resolution.
2. Memory usage is now calibrated. With ZLUDA, SD uses only 10.2 GB of 12 GB memory and frees it after generation. A 15 min 1024x1024 -> 2048 upscale did not hit a memory error. With DirectML you can't use more than a 1.5x upscale together with ControlNet; with ZLUDA you don't have to give up ControlNet. This is awesome.
3. ControlNet works just fine
4. Ultimate Upscaler works normally
5. Inpaint works normally
AMD Pro drivers are slightly faster than the Adrenalin version. There is a slight 5-15 s delay with Adrenalin when pressing "generate" and no delay with the Pro drivers. Idk what causes this.
Excellent work. Thanks!!!
You are the best, sir. I have been struggling to get my 7900 XTX to work with anything. The only ones I got working on Windows were Amuse, which is very lackluster and seems dead at this point, and SD.Next with a workaround, which is not great: it loses the ability to run bigger batches, upscaling, and inpainting, the pics look choppy, and a lot more. Not great, tbh. And Linux was just a mess; most won't open, and the few that work will only run on my CPU.
But with your help, I can finally generate pictures with all the features.
All hail the king!
great stuff! Great tutorial as always, thank you.
Thank you so much for watching :)
Ty so much for putting in the time and effort to help random people figure these things out. Almost every single one of your recent videos helped me navigate this new world of technology, and I wouldn't even be able to try it without you. Ty infinitely.
You are very welcome! I am glad they helped! Thank you for watching!
Thaaanks a lot for your video! After I spent about 24 h bricking everything, I finally stumbled across your channel! You helped me get my SD to run so much better than before! I'm looking forward to your next video with some more SD optimizations for Windows users :)
On that note, is there a PayPal or something where I can buy you a coffee? You saved me from insanity!
This was a lifesaver! Fiddled for 2 days to get Olive/ONNX etc. working at an at least usable level, and after installing ZLUDA using this tutorial, (almost) everything works out of the box without constant tinkering.
Inpaint sketch does not work properly (it renders the whole image instead of the mask area), but that is probably a -directml fork issue.
Overall, for users who don't want to go to Linux for real ROCm, and until complete ROCm is on Windows, I think ZLUDA is an excellent compromise that still provides tons of functionality for folks on Windows. Thanks for watching!
I was really getting frustrated with all that shit.. Thank you so much for this video! Finally I can use SD properly again 🙏
Got it running at last, all thanks to you!!
Very nice tutorial. I've moved over to the NVIDIA side, but your tutorials were extremely helpful for setting up SD with Olive when I was still using my RX 7800 XT.
Thank you for making it easier for AMD users stuck in windows who are curious about trying SD.
Thank you :)
Are you seeing better performance when considering price? This is skewed by location, of course, and the used GPU market. I guess Windows is easier, but won't you get better performance with Linux?
Very nice, because it doesn’t work?
This goes back and forth. About a year ago price/performance was mostly on AMD's side, but due to continued improvements NVIDIA likely has an edge now if you can get a good price on, say, a 3080 or maybe a 4070 Super.
With AMD: yes, Linux will give you better performance 99% of the time because of full ROCm.
Very nice work, thanks a lot!
You are welcome! Thanks for watching!
thank you so much, this was actually not too bad to set up!
Yea, it is not exactly straightforward, but it is not that bad either. Thank you for watching and the kind words :)
Thank you for this. Using ONNX and Olive was kind of great, getting faster it/s, but not being able to use LoRAs and having to convert models made it not that enjoyable. Still, it was good to learn and practice with.
Finally, after days of trying, I found your video... really good explanation, and I was finally able to make it run.
I’m glad it helped! :) thank you for watching!
Thank you very much, it generates pictures on an AMD 6800 at around 5 it/s.
Just got Stable Diffusion installed, failed the CUDA test, then you popped up. Thanks heaps!
You are the only person who has a workable SDXL AMD guide; also all the other stuff like torch, torch-cu, and tensor work well, and that is rare.
I started with one of your other videos, but this one got me officially up and running (on a 6600XT). Much appreciated!
Glad it helped and worked without issue (hopefully). :) thank you for watching!
What is the generation rate of it/s with zluda? Is the generation faster than with directml? Interested to know before installing, I have a rx6600.
@@White-yz4kw same doubt
@@Torva01 sounds like if you've an ❌ on the HIP SDK it's about 3 times slower than Linux ROCm, at least according to one test with a 6700 XT.
Safe to say it'd be memory efficient regardless, and I'm tempted to try it on a 6700, but I have to check if it's useful for something else, like DLSS maybe, because that speed gain is not worth it alone.
same gpu. is it too slow to use with expectations of say an image every 1 or 2 minutes?
Thank you so much for this tutorial. It worked for me, and I have an unsupported 6700 XT. First time I got inpaints and SDXL working properly. You do a good job explaining things, but the best part is how you put links to everything in the description. Makes my life so much easier.
What's your performance like with the 6700 XT? I'm curious.
My 6600 XT does 2 it/s; it sucks. Shouldn't have cheaped out on a card lol.
I also have a 6700xt and am struggling to make it work, would you be able to share any issues you had while setting this up and how you overcame them please? :)
@@Eminic112 between 1 and 2 iterations per second; it depends on the prompt. More tokens take longer.
@@Jay-js6zr I just followed the guide. Wasn't too hard. Make sure you are following it to the letter.
Great tutorial!
P.S. For my 7900 XTX the perfect arguments for SDXL, with minimal crashes (one out of ~500 generations) at 1024x1024, are:
set COMMANDLINE_ARGS=--use-zluda --disable-nan-check --no-half-vae
set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True,max_split_size_mb:512
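For context, those two lines would sit in webui-user.bat roughly like this (a sketch assuming the stock stable-diffusion-webui-directml layout; tune max_split_size_mb per card):

```shell
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
:: ZLUDA plus the stability flags from the comment above (7900 XTX, SDXL)
set COMMANDLINE_ARGS=--use-zluda --disable-nan-check --no-half-vae
set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True,max_split_size_mb:512
call webui.bat
```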
👍👏
this video is a blessing thank you!!
You are welcome :) thanks for watching!
It worked! Thanks!
The last time I followed your ComfyUI + Windows with DirectML guide, it worked like a charm for my RX 6600 with SD1.5. Wondered if this is any faster. Got myself a 4070S now tho 😁
I believe this should be a decent bit faster than plain DirectML; if I am remembering correctly, it might be about double the performance of DirectML alone.
Thank you, works perfectly on my RX 6800, so fast. Amazing.
Fantastic! I’m glad to hear that. Thank you for watching :)
Okay, I finally got it working thank you so much, you’ve earned a sub
Glad it’s up and running! Thank you for watching! :)
@@FE-Engineer keep up the good work, so much faster now 😌
It works on an RX 7600 XT; thanks for this amazing tutorial, the only one that really worked for me. Like and sub.
You are very welcome! Thank you for watching!
you are a legend
Thanks, I've finally found a functional tutorial; it's working with an RX 6650 XT. Greetings in Spanish: I understand English but don't have good diction. Thanks.
Finally one that worked. My it/s increased about 500% in speed. Thank you!! I do have a question: does soft inpainting work with this implementation of SD? I have not had success running soft inpainting. Thanks.
You're the best. Hope your family is all good.
Thank you so much! Family is getting there. My son has a lot of medical issues. So long road there. But thank you for asking! :)
There were a lot of messy errors, but I finally succeeded.
Thank you!!
Yea. It's kind of crazy how many things say "error" and don't matter. But it only takes one to wreck everything.
Thank you, easy and good guide
Thank you for the video. Took me a while to figure it out, but I finally managed a decent generation improvement on my setup: about 11 it/s in SD1.5 on a 7900 XTX. If others read this, try out the "--use-zluda" flag; stable-diffusion-webui-directml and SD.Next do the patching for you and install the correct torch version. Much easier this way.
How does that figure translate to time? I'm guessing around an image every 5 or 10 seconds at lowish resolution?
@@matthewfuller9760 you divide the iteration count by the it/s. That gives ~2 s for 20 iterations of SD1.5 at 512x512, or ~12 s for SDXL base at 25 iterations at 1024x1024. More if you swap models, i.e. if you run an SDXL refiner, but AFAIK that mostly depends on your SSD speed.
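The arithmetic in that reply can be sketched as a tiny helper (the step counts and rates are illustrative figures from the comments, not benchmarks):

```python
def seconds_per_image(rate_its: float, steps: int) -> float:
    """Time for one image at a steady iteration rate given in it/s."""
    return steps / rate_its

def rate_from_s_per_it(s_per_it: float) -> float:
    """UIs flip the readout to s/it below 1 it/s; convert back to it/s."""
    return 1.0 / s_per_it

# ~11 it/s on SD1.5 at 512x512, 20 steps -> roughly 1.8 s per image
print(round(seconds_per_image(11.0, 20), 1))
# 3.5 s/it (SDXL on a slower card), 25 steps -> 87.5 s per image
print(seconds_per_image(rate_from_s_per_it(3.5), 25))
```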
How do I install the correct torch version and get it into the right folder? Complete newbie here and having issues.
I've been using DirectML all along, so seeing this is really great. Many thanks to the author; I actually got it configured successfully!!!
Thanks for your tutorials, they are really well explained.
For others like me who have an old config:
I tried, even though I knew very well that my GPU wasn't on the list. If you get this message: "rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU" it's dead!
What exactly do you mean by "it's dead"? I also get this error even though I have an RX 7900 XTX, which is most definitely fully supported.
Thank you so much for this. The 7900 XTX is finally worth it. I had to disable integrated graphics in the BIOS to get this to work. Excellent instructional video. This shit is crazy.
Ah yes. You could likely set it via Windows environment variables instead (I think it is HIP visible devices, set to 1), but disabling it in the BIOS works as well.
Thank you so much.
10 images at 1024x1536 (Hires fix from 512x768) 7900XT
With previous directml: 16min
Now with Zluda: 5min 30s
Whoah. That’s way better! Nice!
Will take a look into it.
Such a tutorial has been awaited for a long time. Thank you so much for your service to the AMD community, which is so hated by the AI community.
You are welcome. I'm glad there is finally something on Windows with relatively decent performance that doesn't seem to be seriously lacking in anything.
Excuse my language... HOLY SHIT, this is good. I gave up on Windows and have been on Linux for a while, but now after testing this on Windows... oooh I love you. I can finally utilize my 7900 XT to its potential. Thank you for the easy tutorial.
I know, right? It's sooooo good! While it isn't perfect, and I still want full ROCm on Windows, this is in my opinion a very reasonable not-quite-full-ROCm alternative, finally!
Having to juggle between Windows for gaming and Linux for AI was frustrating, but this is just so fast, even more than when I was on Linux. Thanks for the work, as I'm sure I'm saying on behalf of the whole AMD community :)
I appreciate your content. It's so frustrating that it can't just work for AMD on Windows like it does for NVIDIA cards. I'm hoping that'll change in the near future, but for now I use my 3070 for SD and my 7800 XT for gaming, and I'm good with that.
Do you know how much it/s I should get with an RX 7800 XT if I've done everything right?
This tutorial is awesome; finally the 7900 XTX is usable in A1111, hallelujah!
Thank you so much for this. I found it really helpful, especially considering that I am running one of the RX 6XXX GPUs. In the end, the only thing I found on Stack Overflow to get past the issue was to delete the venv folder and then run the webui-user.bat file. But after a reboot, it seems to be outputting solid black or white images again, even after deleting that folder again. Any idea why this might be?
Holy shit it actually worked., big thanks man!
😂😂 you are welcome. I’m glad it worked :). Thank you for watching!
Best tutorial; this worked for me. Too bad the RX 6800 doesn't have the "AI matrix" improvements RDNA3 has, so for that same test prompt I only got around 2.6 it/s...
Also... is it just my impression, or is it more VRAM-hungry than running on NVIDIA hardware?
Damn you were not lying about that first run. I ended up at nearly 30 minutes
Hi, thanks for the tutorial! I did everything as you said but I'm getting an error "launch.py: error: unrecognized arguments: --use-zluda". My GPU is RX 7800 XT
Did you add the correct path?
@@kobusdowney5291 Yes. BTW I installed SD.Next and ZLUDA works fine, but in A1111 it doesn't for some reason.
Thanks for the video! Got it up and running with barely any issues. Only question I have is about model training. Is that feature still only an Nvidia thing? When I try to create an embedding it seems to create one but the dropdown in the training tab doesn't seem to function. I also see errors referring to embeddings in command prompt.
That is possible. Zluda seems to correctly translate a decent amount of cuda functionality correctly and accurately. But even as a translation layer it is definitely not a 1:1 map of all cuda functionality. It is more than reasonable to assume as you get into more complex cuda functionality that the translation layer may not function properly or accurately. Both would result in failures or crashes.
You might try dreambooth. I have not tested. It might work? Likely you will run into the same or even more issues though. Hard to say without trying.
For generating pics on an AMD GPU, after all the steps in your vid, must I use optimized models or not?
I'm assuming that for this to work, your GPU must support the runtime and the HIP SDK, right?
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check.
Any clues? Do I need to install another version of pytorch? If so, how? Thanks in advance!
Hi. I ran your tutorial with ZLUDA on a 6700 XT. Are there any optimizing settings for SD when using ZLUDA?
I'm very surface-level with my understanding: does this remove the option for ONNX? I'm happy with the speeds but am curious whether it's possible to optimize these models further for AMD. I've installed the ReActor extension, which seems to want the onnxruntime-gpu package. Everything functions as it should (including ReActor), but I'm curious whether there is a way to increase speed further.
Onnx has significant drawbacks.
ZLUDA, to my knowledge, likely will not apply to or work with the ONNX format. I don't believe ONNX is necessarily unavailable; it likely just will not use ZLUDA.
The ONNX trade-offs were that speed is a bit better, but inpainting will not work, SDXL will not work, and you may have to convert models, which can be a little time-consuming and has its own issues sometimes.
very cool, thank you
Glad you liked it, thank you for watching! :)
Hey, I'm trying to add a faceswap extension, but both I tried aren't working. Is it because this is a fork, or are the extensions the issue? (The install seems complete, but nothing shows up in the UI.) (ReActor and FaceSwapLab)
This tutorial seems to start in the middle; what's with the command prompt? How do I get to that bit first?
Noice, works. The only thing I did differently from this video was download the latest version of ZLUDA. It's slow though, on an RX 6700 XT 12 GB. I guess my card isn't as good as I thought it was. At least it freaking works.
I did change to the latest version. Overall I did not honestly see any noticeable difference. But for some it might provide a more noticeable change? Or perhaps it supports more cuda functions?
@@FE-Engineer Yes, it's about the functions. Everything I would like appears to work, as previously it would just break. And there definitely is a performance increase.
Any news on ZLUDA support for stable diffusion webui forge? The latest layerdiffusion seems to only support forge.
What version of PyTorch are you using? I saw 2.2.0 on the screen in passing, but is +cu also included? The reason I'm asking is that I'm getting SD to run fine, with gpu recognized, but when I attempt to load a model I get an error:
20:14:51-079163 ERROR Diffusers failed loading:
model=D:\stablediffusion\SDNext\automatic\models\Stable-diffusion\dreamshaper_8.safetensors
pipeline=Autodetect/NoneType Building PyTorch extensions using ROCm and Windows is not
supported.
20:14:51-083150 ERROR loading
model=D:\stablediffusion\SDNext\automatic\models\Stable-diffusion\dreamshaper_8.safetensors
pipeline=Autodetect/NoneType: OSError
┌───────────────────────────────────────── Traceback (most recent call last)
I'm currently using PyTorch 2.3.0+cu118 (I'm currently using the vladmandic fork, but this also occurs on my lshqqytiger fork as well.)
Hey, thanks for the ongoing amazing videos. Worked like a charm the first time, but on the 2nd try I get the skip-torch-cuda error; adding --skip-torch-cuda only results in an error several people in the comments are experiencing.
EDIT: Found the solution: I had to open cmd in the ZLUDA dir, then navigate to the folder with webui.bat and start it like in the video!
For me it was that I hadn't started zluda.exe, or hadn't opened cmd as admin; idk which solved it.
Thank you very much for the detailed tutorial❤, but I have a little problem which is that the Karras type samplers do not appear. Any solution? 😢
Thank you for your effort for us Amd people 😁
Can I just add ZLUDA to my current A1111 installation? I've used DirectML for months; everything works, slow but works.
Is there a chance of breaking something?
I think if you have an up-to-date version then yes, just add the --use-zluda flag.
Perhaps use --reinstall as well.
Where were you when I had AMD -.- Good job.
Do you have a planned video about ZLUDA with ComfyUI, or can you give me a short rundown on how to get it working? I only got DirectML working with ComfyUI. I would be soo happy if you did this :)
Can you please provide exact version numbers for both ZLUDA and stable-diffusion-webui-directml? Newer versions of both have been released, and I'm getting errors when I try to run webui.bat at the end of the installation process. I assume this is because I'm using incompatible versions of different packages? Thank you!
Do upscalers not use ZLUDA? My speed dropped from it/s to s/it when using ESRGAN on a 7900 XT, I mean when hires fix is checked.
I have not tested. It’s possible that specific upscale might not. That one has always been noticeably slower for me as well.
thank you so much
How fast is this compared to Olive approach?
ZLUDA is not officially supported by AMD, but they have partnered with Microsoft on Olive and other improvements.
It works very well for me, thanks friend good job.
Can u help???
Here is the problem: I have an RX 6750 XT, installed the libraries, tried different ways, and the error does not go away. Stable Diffusion either identifies the graphics card as the gfx90c architecture, or I get
"RuntimeError: invalid argument to reset_peak_memory_stats"
How do I uninstall torch-2.2.0+cu121 and install torch-2.2.0+cu112? I think that is my problem, because when I run SD it runs on the CPU.
Hey man, thanks for all the help you've delivered. One thing: are you planning to cover running LLMs locally with LM Studio via ZLUDA?
That would be great, as when I was using Linux it worked fine, but now I'm on Windows and it doesn't work.
I was not planning on it. But you are the second person to ask. I will spend some time on it over the weekend and see if I can get it running properly. No promises, it may be using cuda functions that are not supported through zluda.
As another semi-related side note: supposedly AMD has a build of LM Studio that is supposed to work with the HIP SDK. I had no luck getting that to work, though. :-/
Thank you so much bro, it's working on RX 6700 XT!
Took 23+ minutes on the first generation.
You are very welcome! Thanks for watching :)
Very nice! Can it generate fast now?
@@joris2032 well, kinda fast, under 15 seconds maybe? Depends on the resolution.
@@udinmoklet sounds okay! I am trying to install it for my 6700 XT as well, but the HIP SDK isn't working for my card; I'm now trying another version, 5.5.1.
@@joris2032 there are extra steps that you have to do; read the documentation.
Hey, two questions. I actually got ZLUDA working yesterday, but I'm getting the same performance I had with DirectML (no ONNX or Olive).
First, why did you use ROCm 6.0? They say to use 5.7; is it actually better?
Second, why did you use the old 3.1 ZLUDA release from lshqqytiger and not 3.5?
I am currently thinking of installing Linux to try it there, because I get zero performance gain.
Might as well try doing exactly what you did, though, just for testing.
Watching again, you may actually have used 5.7; not sure now.
5.7 is the only one for windows currently. 6.0 is Linux only for now to my knowledge
Why don't you have the git link for the Stable Diffusion web UI in the description?
Good question. I don’t know. I’ll add it in.
I just want to say thanks; it seems to be working on my 7900 XTX. I'm just wondering, do you think we can use this in InvokeAI? I kind of like its layout and would love to use it on my AMD GPU. When you get a chance, let me know if you think it's possible.
I can pretty definitively say for right now, on Windows, I doubt you will get it to run with ZLUDA.
I spent multiple hours. cuDNN is heavily used in there, and while it may well be possible, I have not figured out a good way to disable it entirely and get it running. It is close; I just cannot entirely get cuDNN disabled, and it seems to be woven deeply into this program overall.
@@FE-Engineer I guess we'll either have to wait for ZLUDA support or full ROCm support on Windows, correct?
That or if the devs decide to allow it and make a flag that disables cudnn.
Hey, I'm trying to get a different program called Flowframes working (just to test CUDA RIFE with ZLUDA). When I open the program with ZLUDA on my 5700 XT (HIP is installed, env variables added; it wouldn't even run through ZLUDA without the env variables), it just says no NVIDIA GPU is detected (at the top it says the 5700 XT is detected). I go ahead with a CUDA interpolation task anyway, and when interpolation starts it says no CUDA GPU detected, falling back to CPU (and it does indeed fall back to CPU). Sorry this isn't really on topic for your video; I'm just not seeing any other videos demonstrating ZLUDA with AI stuff. Thanks in advance if you have any ideas.
Is the 5700xt supported by hip sdk?
@@FE-Engineer ahh, my bad... I didn't think the 5700 XT wouldn't be supported. Man, between this and AFMF not coming, RDNA1 is effectively discontinued for new features already.
Thanks again for the tutorial!
Is there any way to get extensions such as controlnet to work using zluda?
It works.
I'm having an issue during installation. I'm using the latest AUTOMATIC1111 for DirectML. When running webui.bat it errors out with "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)". I'm using an XFX 7900 XTX and a Ryzen 7950X.
I have a problem here. This method works fine for me UNTIL I restart my PC, and then the good old "Torch can't use this GPU" error pops up again. I need to delete the whole setup and start fresh like this in order to use SD. Any solutions?
hi, I have this error after launching webui.bat to install everything:
rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1036
rocBLAS error: Could not initialize Tensile host:
regex_error(error_backref): The expression contained an invalid back reference.
Press any key to continue . . .
Any idea what to do? Thanks for your help
same error
hey I just fixed it. disable your integrated gpu in device manager and wait a while as it loads and eventually downloads
@@banned-user thank you, I will try it later. I'm not too sure, though, how to disable the integrated graphics.
You can do it from the BIOS, for one.
But you can also set an environment variable for ROCm telling it to ignore the iGPU.
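The environment-variable route might look like this (a sketch under assumptions: it presumes the iGPU enumerates as HIP device 0 and the discrete card as device 1, which you can check with the hipInfo tool shipped in the HIP SDK):

```shell
:: Hide the iGPU from ROCm/HIP so only the discrete card (device 1) is used.
setx HIP_VISIBLE_DEVICES 1
```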
I can tell you something is wrong: see how the slashes go from backslashes to forward slashes, and at one spot there is a backslash next to a forward slash? Look at your env variables and check whether something is weird.
Great tutorial! Thank you! Obviously my RX570 8GB is not supported at all :(
thank you sir
BTW, when is your SD.Next with ZLUDA video dropping? Just curious and waiting for it, as I use SD for my social media.
Should be this weekend. Might have two: one semi-updated guide for this one (it's not really different, just shorter, since it now helps you get the files set up properly), and probably one on SD.Next.
And I might do one on ComfyUI, but that is still weird and very manual, I believe. :-/
@@FE-Engineer yes, personally I hate ComfyUI; it's too complex for me to work with.
As a 7900 XTX owner I was getting so mad that I couldn't do any proper AI generation. Bless you for your tutorials, man. You are amazing, the true hero of AMD.
But you should consider telling people about compatibility with other models at the beginning of the video, to be honest.
That’s fair. I will try to include something at the beginning about this.
When installing by running the user.bat file, it says "error 1/2 no space left on device". I have 437 GB of free space.
Inpainting on Windows with AMD is crazy. Can you train LoRAs too?
Thanks for the video.
I'm able to get the webUI running; however, I get "RuntimeError: The NVIDIA driver on your system is too old (found version 8000)" when trying to generate an image.
Tried reinstalling HIP-SDK and python 3.10.6, didn't work.
Any idea what could be the problem? Thank you very much.
That is a really odd error. No one else that I have seen has gotten that error.
Unfortunately I don't know how to help here. Are you sure you got the ZLUDA files, copied them over, renamed them, and added everything to the ENV properly?
Try rebooting your computer if you have not.
@@FE-Engineer Yes, I'm pretty sure I followed every step as instructed and rebooted several times after adding those 2 ENV variables.
Also, I had to add --skip-torch-cuda-test to even get the webui started, which is quite different from your video. Weird...
Guess I'll have to stick to dual booting Linux for now.
Nonetheless thank you very much for replying.
Hey, I'm on the step where you type in webui.bat. When I type it in, it says "'webui.bat' is not recognized as an internal or external command, operable program or batch file."
Some questions as I'm watching your tutorial at the moment:
Can you install the HIP drivers alongside the normal AMD drivers for Windows, or do you have to choose one or the other?
Regarding ZLUDA: in your video you downloaded version v3.1, but the most recent version is v3.5. Is there a reason for that, or does it not matter which version you download?
It seems I'm running into an error when running webui.bat. I've successfully installed Python 3.10.6 (added to PATH), Git, HIP, added ZLUDA to PATH, etc. But when I run webui.bat I get this:
rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1031
rocBLAS error: Could not initialize Tensile host:
regex_error(error_backref): The expression contained an invalid back reference.
Press any key to continue . . .
@Eminic112 At the end of the video and in the video description it talks about replacing rocBLAS files for some GPUs...
@@FE-Engineer right, I didn't notice my card had one tick and one X. Thanks for that!
Do you always have to wait so long for the first generation when starting the webui, or is it only the very first time?
Also, is this compatible with something like Forge, or are the libraries not compatible?
Regardless, thanks for yet another basically groundbreaking tutorial! It's really not nice having to dual-boot into an OS you barely have any experience with just for one use case.
@Eminic112 just the very first one.
@@Eminic112 run the console as administrator
Does the SnapCam program run smoothly if I do this? There are glitches in some filters, things that NVIDIA manages to make work perfectly.
ZLUDA is a reasonably good translation layer, but it is definitely not perfect, so it is entirely possible there are some spots where coverage is spotty.
thank you!
You're welcome!
Hi, I did everything the same as you, but the CPU still handles the generation of images. the speed is extremely low, the GPU is not used (in my pc rx6800)