This actually doubled my generation speed. Absolutely lovely.
What GPU
@ESGamingCentral 6700xt
This tutorial does not work anymore. The installation guide has changed and I just get errors when installing.
Finally, a manual that gets SD running on my 7900 XT. It seems it builds parts of the software during the installation process. Great, so it is compatible with "my" hardware. Also, the error messages are understandable. The speed on a 512x512 pic (Stable Diffusion 1.5 model) on the 7900 XT is between 13 and 15 it/s. Thank you very much. I had an enjoyable afternoon yesterday - and it will not be the last one.
Do you think it's possible to use DeepSpeed with ZLUDA, for things such as RVC?
all faces are messed up or it just keeps creating black screens
I have done every step for my card, an RX 6600, yet I still can't get it to work, sadly. The error I get is: OSError: Building PyTorch extensions using ROCm and Windows is not supported. I don't know why the fix doesn't seem to work.
I have an RX 6600 too and it works. But it's super slow; I don't know if I'm doing something wrong or if we have to wait longer for ours to be HIP SDK supported too. For now, DirectML/CPU will have to do...
Hi. Does it make sense to get an RX 7800 XT today, or is it better to get an RTX 3080 Ti, for AI training and image generation? Thanks in advance.
The RTX 3080 Ti is much faster in terms of it/s (even though it has 12 GB vs the 16 GB of the 7800 XT) and more compatible with AI tools than any AMD card, due to CUDA.
@@northbound6937 thank you
How can I install the HIP SDK to another folder? I don't have enough space on my disk.
There's an install location option according to the guide rocm.docs.amd.com/projects/install-on-windows/en/docs-6.0.2/index.html
Finally got it working, but now I can't use safetensors? (Edit: got it all working.)
Sadly can't run it on my 5700 series. I've tried the tutorial on vlad's GitHub; it said it was running on ZLUDA, but the render time is still the same as with DirectML. Maybe it's actually running, maybe it isn't, or maybe the model is the problem; I still don't know. XD At the moment I'm using ROCm 5.7 with ROCmLibs. In the process of this tinkering, I found each WebUI has its own render characteristics: even though they use the same architecture, whether DirectML or ZLUDA, the images are slightly different from each WebUI. I mainly use "stable-diffusion-webui-amdgpu", and trying to use ZLUDA there still gives some errors during installation; that's why I tried SD.Next. It seems to work, but the render time is still the same, which got me confused. Maybe I'll try running it with various different models next time.
Doesn't work on RX 6600 😪
It does; you have to do an extra step: installing the HIP SDK and replacing library files.
@@punkplayzz Yeah, thanks. It works.
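For anyone searching for that extra step: the commonly shared workaround is to install the HIP SDK, then overwrite its rocBLAS library files with builds for your GPU (the "ROCmLibs" archive mentioned elsewhere in this thread). A rough sketch of the copy step in Python - the HIP SDK path and the extracted ROCmLibs folder name are assumptions, so adjust both to your machine before running:

```python
import shutil
from pathlib import Path

def replace_rocblas_library(rocm_library: Path, replacement: Path) -> str:
    """Back up the stock rocBLAS library folder, then overwrite it
    with the replacement builds. Returns a short status string."""
    if not (rocm_library.is_dir() and replacement.is_dir()):
        return "paths not found - adjust them before running"
    backup = rocm_library.with_name("library.bak")
    if not backup.exists():
        shutil.copytree(rocm_library, backup)  # keep the originals around
    shutil.copytree(replacement, rocm_library, dirs_exist_ok=True)
    return "library files replaced"

# Assumed default HIP SDK 5.7 location and an extracted ROCmLibs.7z folder.
print(replace_rocblas_library(
    Path(r"C:\Program Files\AMD\ROCm\5.7\bin\rocblas\library"),
    Path("ROCmLibs/library"),
))
```

Keeping a `library.bak` copy means you can restore the stock files if the replacement builds misbehave.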
Where can I upscale generated images? I can't find this.
In the extras tab if I recall correctly
I tried it, but when I want to generate something it says "model not loaded"?
Have you downloaded an SD model (from, for example, Civitai) and put it in the SD model folder? The dropdown in the top left should list all loaded models. If it's empty, you either haven't downloaded an SD model or it's in the wrong folder (see my previous video on SD; I mention the location for models there).
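For webui-style UIs, checkpoints normally live in `models/Stable-diffusion` inside the webui folder. A small sketch that lists what the dropdown should see - the folder name `stable-diffusion-webui-amdgpu` is an assumption, so point it at your own install:

```python
from pathlib import Path

def list_checkpoints(model_dir: Path) -> list:
    """Names of checkpoint files the model dropdown can load."""
    if not model_dir.is_dir():
        return []
    return sorted(p.name for p in model_dir.iterdir()
                  if p.suffix in {".safetensors", ".ckpt"})

# Assumed webui checkout folder; change this to your own install path.
model_dir = Path("stable-diffusion-webui-amdgpu") / "models" / "Stable-diffusion"
found = list_checkpoints(model_dir)
print(found if found else f"nothing in {model_dir} - the dropdown will be empty")
```

If the script prints an empty result, the file is either missing or sitting in the wrong folder.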
@@northbound6937 OSError: Building PyTorch extensions using ROCm and Windows is not supported.
17:50:16-045137 DEBUG Script callback init time: image_browser.py:ui_tabs=0.46 system-info.py:app_started=0.25 task_scheduler.py:app_started=0.12
17:50:16-046137 INFO Startup time: 9.98 torch=3.48 gradio=0.67 libraries=1.94 extensions=0.64 face-restore=0.05 ui-en=0.13 ui-img2img=0.05 ui-control=0.08 ui-settings=0.17 ui-extensions=0.87 ui-defaults=0.06 launch=0.12 api=0.07 app-started=0.37 checkpoint=1.06
17:50:16-048139 DEBUG Save: file="config.json" json=29 bytes=1184 time=0.003
17:50:16-050141 INFO Launching browser
17:50:19-143698 INFO MOTD: N/A
17:50:23-884855 DEBUG Themes: builtin=12 gradio=5 huggingface=0
17:50:26-962581 INFO Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:126.0) Gecko/20100101 Firefox/126.0
17:52:00-177439 DEBUG Server: alive=True jobs=1 requests=37 uptime=109 memory=0.98/31.92 backend=Backend.DIFFUSERS state=idle
17:54:00-281476 DEBUG Server: alive=True jobs=1 requests=38 uptime=229 memory=0.98/31.92 backend=Backend.DIFFUSERS state=idle
17:56:00-376863 DEBUG Server: alive=True jobs=1 requests=38 uptime=350 memory=0.98/31.92 backend=Backend.DIFFUSERS state=idle
17:58:00-481532 DEBUG Server: alive=True jobs=1 requests=38 uptime=470 memory=0.98/31.92 backend=Backend.DIFFUSERS state=idle
17:59:59-582811 DEBUG Server: alive=True jobs=1 requests=38 uptime=589 memory=0.98/31.92 backend=Backend.DIFFUSERS state=idle
18:01:59-688014 DEBUG Server: alive=True jobs=1 requests=38 uptime=709 memory=0.98/31.92 backend=Backend.DIFFUSERS state=idle
I also use "stable-diffusion-webui-directml" and this one works fine.
Don't know how to fix that. Maybe ask in the ZLUDA thread in the help channel on discord (link in github.com/vladmandic/automatic)
Do you know if it can run nicely on an RX 6650 XT?
How many it/s can you get with the RX 6800 non-XT (at default 512 x 512)?
I could get the RTX 2080 Ti or 3070 for the same price where I live, but I'd lose a lot of gaming performance and longevity due to the much lower VRAM.
You just gotta make a judgement call. If AI is your main priority, I would go for Nvidia (TBH I regret getting an AMD card now): it/s is going to be significantly faster than on a 6800, with fewer compatibility issues (unless you don't mind using Linux with an AMD card and fiddling with errors until you get it to work). If your main priority is gaming with a bit of AI on the side, AMD is fine for that.
If it keeps using the wrong Python version even though you edited the paths, try deleting the venv and __pycache__ folders.
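A sketch of that cleanup, assuming the webui checkout is named `stable-diffusion-webui-amdgpu` (adjust the path): deleting `venv` makes the launcher rebuild it with whatever Python it now finds first, and removing `__pycache__` folders drops bytecode compiled by the old interpreter.

```python
import shutil
from pathlib import Path

def clean_python_state(webui_root: Path) -> None:
    """Delete the venv plus every __pycache__ folder under the checkout."""
    shutil.rmtree(webui_root / "venv", ignore_errors=True)
    # Materialize the list first, since we delete folders while walking.
    for cache in list(webui_root.rglob("__pycache__")):
        shutil.rmtree(cache, ignore_errors=True)

# Assumed checkout folder name; change to wherever you cloned the webui.
clean_python_state(Path("stable-diffusion-webui-amdgpu"))
```

The next launch of webui.bat will take a while, since it recreates the venv and reinstalls the requirements.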
With ComfyUI my RX 6600 can generate 8.5 it/s; at FHD: 4.
AMD made ZLUDA closed source. It's no longer gonna be a thing, but I think the original developer is redeveloping it from scratch.
fking amd man
The version that was released when AMD stopped developing it has been forked so that others can continue without starting from scratch. That's how automatic1111 works with zluda now.
Do SDXL models work on this? Have you tried it?
Which model are you interested in? I can try it
What's your gpu? VRAM?
Compared to non-ZLUDA, how much faster is it?
8x
@@northbound6937 Whoa, so fast!! I'll try it, thanks
I'll have to give this a try this evening with my RX7600 (although a different process since I use Fooocus). I am tired of the 5-7 s/it compared to my other PC with RTX3050 that gets around 1 s/it. Luckily I have a 12GB RTX2060 on the way that should speed it up into the it/s range.
Is the RX 570 with 4 GB VRAM supported or not?
Not long ago I tried DirectML with an RX 580 and got about 4 s per iteration. It was slow, but it worked. Don't know if ZLUDA was supported, but if you are serious about generative AI, upgrade your GPU; it's worth it. The newer cards are way more optimized and way more powerful. With the 7800 XT I now get about 4 it/s, so it's waaaay faster.
@@bitkarek Man, how do you fix that "RuntimeError: Torch is not able to use GPU" error? I have an RX 580 GPU and always have problems using SD.
@@DouglasRivitti I would like to help you, but everybody has different errors. Radeon software up to date, Windows up to date, install all requirements from the txt file... it "should" run. There was some workaround for older cards for ROCm! Be sure to have it; I'm not sure you can run an RX 580 with ZLUDA and ROCm... Try the DirectML version. It's slower, but it worked for me.
Hmm, I got to a state where, after an hour, it started to make a picture, but it ended at like 5% of it. And it uses ZLUDA on my CPU, not GPU :D nice.
Thanks for sharing 👍👍🙏🙏
Can the AMD Radeon RX580 2048SP graphics card run the program?
I'm not sure; it might. For that, follow the steps in the section 'Replace HIP SDK library files for unsupported GPU architectures' of the guide linked in the description (specifically the last bullet point of step 3 of that section, the one that links to 'ROCmLibs.7z'). Good luck!
DirectML works on that card. ZLUDA in general also (Blender etc.).
"Sheckel time" - careful with those jokes, mate 😁
I got it working on my 6750 XT, but I'm not satisfied with the speed, so I'm gonna steal my lil bro's RTX 3070 Ti instead.
My anime hentai model is not compatible smh
It seems to me a lot of models aren't working with this ZLUDA crap... not worth it if that's the case.
150 euro? Seems better to sell your AMD card and buy an Nvidia card.
GPU RX 7800 XT... everything worked fine the first time, but the second time it no longer opens with webui.bat.
It worked for you? I thought ZLUDA was already dead.