How to fix Automatic1111 DirectML on AMD 12/2023! Fix broken stable diffusion setup for ONNX/Olive

  • Published Sep 6, 2024
  • Update March 2024 -- better way to do this
    • March 2024 - Stable Di...
    Currently, if you try to install Automatic1111 using the DirectML fork for AMD GPUs, you will get several errors. This shows how to get around the broken pieces and be able to use Automatic1111 again.
    Install Git for Windows:
    gitforwindows....
    Install Python 3.10.6 for Windows:
    www.python.org...
    be sure to add it to PATH!
    Clone the Automatic1111 DirectML fork:
    copy the URL for the .git repo
    github.com/lsh...
    Run the webui-user.bat file to create the virtual environment -- it will give an error.
    Fix the errors:
    venv\Scripts\activate
    pip install -r requirements.txt
    pip install httpx==0.24.1
    Edit the webui-user.bat file inside the automatic1111 folder, add the command line arguments, and save:
    --use-directml --onnx
    Inside the automatic1111 folder,
    find the modules\sd_models.py file and edit it:
    comment out lines 632 - 635 by putting a # in front of each line, and save the file.
    Close out Automatic1111.
    Now you can run Automatic1111 by double-clicking the webui-user.bat file from Windows, or make a shortcut to it if you prefer.
    Automatic1111 should now work the way it used to and should allow optimizing ONNX models.
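
    As a recap, the recovery steps above can be collected into one cmd session (a sketch: it assumes the default clone folder and a venv already created by a first run of webui-user.bat, and the httpx pin is the one from the video, which may no longer be needed on newer checkouts):

    ```bat
    :: From inside the cloned stable-diffusion-webui-directml folder,
    :: after the first (failing) run of webui-user.bat has created the venv:
    venv\Scripts\activate
    pip install -r requirements.txt
    :: pin httpx to a known-working version (per the video)
    pip install httpx==0.24.1
    ```

    After that, add --use-directml --onnx to COMMANDLINE_ARGS in webui-user.bat and relaunch.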

Comments • 695

  • @PhilsHarmony
    @PhilsHarmony 8 months ago +30

    Thanks so much for this video, much appreciated!
    Finally a tutorial that actually got me past the "Torch is not able to use GPU" error. For programmers that might all be easy and self-explanatory; for everyone else it's a real hassle to stand in front of these errors that tell us nothing if we don't speak code.
    What I cannot wrap my mind around is why a multi-billion-dollar company like AMD doesn't attach a fix like this at the bottom of their Stable Diffusion tutorial. They must be aware there are issues for many users during install.
    Anyways, we luckily got helpers like FE-Engineer.

    • @FE-Engineer
      @FE-Engineer 8 months ago +5

      You are very welcome! Thank you for the kind words and support on YouTube!
      I am hoping to one day have a working relationship with AMD to be able to help folks even better with AI things as software changes occur in the fast-moving world of AI. Maybe one day? :)

    • @MrRyusuzaku
      @MrRyusuzaku 7 months ago

      Tbh even programmers might not get it in one go, especially if Python is not their thing. I'm one of them; though I had a tiny clue, this video helps a lot

    • @kampkrieger
      @kampkrieger 7 months ago

      @@MrRyusuzaku Even if Python is their thing, you don't just know how this is supposed to work. I get the error that it cannot find venv/lib/site-packages/pip-22.2.1-dist-info/metadata; I have no folder site-packages and I don't know what it is or where it comes from

  • @ml-qq5ek
    @ml-qq5ek 6 months ago +5

    Just found out about Olive/ONNX. Thanks for the easy-to-follow guide; unfortunately it doesn't work anymore. Will be looking forward to seeing the updated guide.

  • @joncrepeau3510
    @joncrepeau3510 8 months ago +6

    This is the only way with Windows and an AMD GPU. Other tutorials get Stable Diffusion running, but only on the CPU. I was seriously about to give up hope until I watched this. Thank you

    • @FE-Engineer
      @FE-Engineer 8 months ago

      Glad it worked for you and you were able to get up and running! Thanks for watching!

  • @adognamedcat13
    @adognamedcat13 7 months ago +12

    I was wondering if you could help me with an interesting issue. After following the steps, it kept telling me that --onnx was an unknown argument. I heard somewhere that with the newest update onnx didn't need to be included as an argument, so I deleted it from the webui-user.bat args line. To my surprise the webui booted as normal, though there was no sign of Olive or, predictably, ONNX. Now I'm getting around 1.5 it/s and I have the exact same card as you.
    On the plus side I have DPM++ 2M Karras now, and it does *technically* work, but the speeds are ridiculously slow.
    Thanks for any/all help and thanks a million for making this series, you're the man!
    Update: to clarify, the error I get if I try to launch it the way you described is ' launch.py: error: unrecognized arguments: - '

    • @Vasolix
      @Vasolix 6 months ago

      I have the same error, how to fix that?

    • @FE-Engineer
      @FE-Engineer 6 months ago +7

      Remove --onnx. They changed the code. It is no longer necessary.

    • @williammendes119
      @williammendes119 6 months ago +4

      @@FE-Engineer But when SD starts we don't have the Olive tab

    • @whothefislate
      @whothefislate 6 months ago +3

      @@FE-Engineer But how do you get the ONNX and Olive tabs then?

    • @tomlinson4134
      @tomlinson4134 6 months ago +2

      @@FE-Engineer I have the exact same issue. Do you know a fix?

  • @on.the.contrary
    @on.the.contrary 6 months ago +7

    Hi, I did just as the video shows and I got this problem: "launch.py: error: unrecognized arguments: --onnx". Anyone got and fixed this?

    • @CANDLEFIELDS
      @CANDLEFIELDS 6 months ago +5

      Been reading all the comments for the past half hour... somewhere above FE-Engineer says that it is not needed and you should delete it. I quote:
      Remove --onnx. They changed the code. It is no longer necessary.

    • @nangelov
      @nangelov 6 months ago +1

      @@CANDLEFIELDS If I remove --onnx, I no longer have the ONNX and Olive tabs and can't optimize the models

    • @ca4999
      @ca4999 6 months ago

      @@nangelov Same problem sadly.

    • @nangelov
      @nangelov 6 months ago

      @@ca4999 I surrendered and decided to buy a used 3090. There are plenty available in Europe for about 600 euros and it is like 30 times faster, if not more.

    • @ca4999
      @ca4999 6 months ago

      @@nangelov The sad thing is, I somehow got it to work after 5 hours just to realize that the hires fix doesn't currently work with ONNX. Should've gone the Linux route from the beginning.
      That's a very solid price for a 3090, congrats ^^ Just out of curiosity, because I'm also located in Europe, where exactly did you buy it?

  • @LeitordoRedditOficial
    @LeitordoRedditOficial 6 months ago +5

    If you get the error:
    RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
    then add "--use-directml --reinstall-torch" to the COMMANDLINE_ARGS in the webui-user.bat file through Notepad.
    This way SD will run off your GPU instead of the CPU. After one use, remove --reinstall-torch; remember, it goes without the quotes. Please share in more videos to help more people.
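
    For reference, a webui-user.bat matching this comment's fix might look like this (a sketch; the set variables are the ones the stock webui-user.bat already defines, and --reinstall-torch is removed again after the first successful launch):

    ```bat
    @echo off

    set PYTHON=
    set GIT=
    set VENV_DIR=
    :: add --reinstall-torch only for the first launch after the
    :: "Torch is not able to use GPU" error, then remove it
    set COMMANDLINE_ARGS=--use-directml --reinstall-torch

    call webui.bat
    ```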

    • @TPkarov
      @TPkarov 4 months ago +1

      Thanks friend, you're a true friend!

    • @LeitordoRedditOficial
      @LeitordoRedditOficial 4 months ago

      @@TPkarov You're welcome, friend. To be honest, it's really best to generate 512x512 images. I have an RX 6800 XT and often when I try something bigger it errors out at 99%, and I waited all that time for nothing hahaha. But if yours is an AMD 7000-series card it may work with bigger images.

  • @chris99171
    @chris99171 8 months ago +3

    Thank you @FE-Engineer for taking the time to make this tutorial. It helped!

    • @FE-Engineer
      @FE-Engineer 8 months ago +2

      Glad that it helped! Thank you for watching and supporting my work. It means the world to me!

  • @zengrath
    @zengrath 7 months ago +2

    Dude, you have no idea how long I've been trying to get Automatic1111 on Windows with my 7900 XTX; the conclusion everywhere I go has always been "use Linux". But I saw AMD's post about how it works on Windows with Olive, yet it wouldn't work for me and I tried for hours. Your video finally got it working. The key part for me was not using the skip-cuda command; nothing I've seen anywhere showed how to properly fix this until your video. Funnily enough, I didn't have some of the errors you did after that, but maybe they updated some things since this video, or I had already installed some of those things, not sure. Thank you so much. I've been using Shark and it's such a pain to use: every model change, every resolution change, every LoRA requires recompiling, it's a nightmare, and it doesn't appear to have as many options as Automatic1111. I hear we still can't do LoRA training, but hopefully that comes later.

    • @FE-Engineer
      @FE-Engineer 7 months ago +1

      Yea. Honestly, I love that Shark kinda just works, but I cannot stand using it. It takes forever. If you want to just load one model, keep one image size, and generate image after image, it's OK. But if you wanna jump around, change models, change image sizes, then Shark is crazy slow.
      You are very welcome! I'm glad you got it working, thank you so much for watching!

    • @zengrath
      @zengrath 7 months ago

      @@FE-Engineer I actually switched to ComfyUI too, also thanks to your other video, and while it may be a little slower, it's still good enough on the 7900 XTX, and inpainting, img2img, LoRAs, and all that works, which didn't in Automatic1111. So much better for me than Automatic1111 on Windows so far, but I'm hoping it improves even more; I noticed some plugins not working when following a tutorial, but at least the basics work.

  • @lurkmoar4
    @lurkmoar4 8 months ago +8

    Thanks for the tutorial, it's the best one I've seen so far and everything works great

    • @FE-Engineer
      @FE-Engineer 8 months ago

      You are welcome. The code changed a few days ago and most people's setups broke. Depending on what you had, it could be fixed several ways, but this seemed the most bulletproof way to make a video saying "do this and it should work."

  • @scronk3627
    @scronk3627 8 months ago +2

    Thanks for this! I ended up not having to comment out the lines in the last step; the optimization worked without it

    • @FE-Engineer
      @FE-Engineer 8 months ago

      You are very welcome! And that is awesome. I'm seeing mixed comments about it: some people still run into it, others don't. Probably differences in what code people have pulled. But I'm glad it worked for you and you didn't have to put in that hacky fix. Thank you for watching!

  • @Ranfiel04
    @Ranfiel04 6 months ago +2

    If you're having problems with the ONNX tab missing, use this command in the stable diffusion folder: git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25
    That reverts the new update that has the problem with ONNX
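
    Spelled out as a cmd session, the rollback suggested here might look like the following (a sketch: the commit hash is the one from the comment above, and the folder name will vary by install):

    ```bat
    :: open cmd inside your stable-diffusion-webui-directml checkout
    cd stable-diffusion-webui-directml

    :: pin the working tree to the pre-update revision that still had the ONNX/Olive tabs
    git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25

    :: reactivate the venv and reinstall that revision's requirements, then relaunch
    :: webui-user.bat with --use-directml --onnx as shown in the video
    venv\Scripts\activate
    pip install -r requirements.txt
    ```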

    • @tmsenioropomidoro7243
      @tmsenioropomidoro7243 6 months ago +1

      This actually helped. You have to load into your created virtual environment (mine is automatic1111_olive), then cd to the folder path (mine is F:\stable... etc), then use git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25 F:\stable... (the rest of the folder's name). Then you have to do everything shown in the video again (it will be much faster because most of the stuff is downloaded already, but the requirements and webui-user.bat need to be edited again)

    • @nielsjanssen2422
      @nielsjanssen2422 6 months ago +1

      You two fine gentlemen have gained my respect. THANK YOU bro, I struggled for hours

    • @user-uz5cg9bu4r
      @user-uz5cg9bu4r 6 months ago

      @@tmsenioropomidoro7243 Well, I thought it worked; the ONNX and Olive tabs are back, but now when I try to generate txt2img I'm getting the error:
      "onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running MatMul node. Name:'MatMul_460' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(2476)\onnxruntime_pybind11_state.pyd!00007FFE8EC9B33F: (caller: 00007FFE8EC9CAA1) Exception(6) tid(1a7c) 80070057 The parameter is incorrect."

    • @tmsenioropomidoro7243
      @tmsenioropomidoro7243 6 months ago

      @@user-uz5cg9bu4r Well, I got a similar issue; it's not generating yet and shows some errors. Trying to figure out what is wrong

  • @patdrige
    @patdrige 8 months ago +2

    You, sir, are the MVP. You not only showed how to install but also how to troubleshoot errors step by step. Thanks

    • @FE-Engineer
      @FE-Engineer 8 months ago +1

      You are welcome! I’m glad it helped. Thank you for watching!

    • @patdrige
      @patdrige 8 months ago

      @@FE-Engineer Do you have a guide, or plan to make one, for text2text AI on AMD?

  • @Wujek_Foliarz
    @Wujek_Foliarz 6 months ago +3

    stderr: ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'C:\\Users\\igorp\\Desktop\\crap\\stable-diffusion-webui-directml\\venv\\Lib\\site-packages\\onnxruntime\\capi\\onnxruntime_providers_shared.dll'
    Check the permissions.

    • @ALKSYM
      @ALKSYM 6 months ago

      Add "--reinstall-torch" to the args and launch webui-user.bat; after the UI launches, delete the "--reinstall-torch" arg. Hope it helps

  • @dangerousdavid8535
    @dangerousdavid8535 8 months ago +3

    You're a life saver. I couldn't get the ONNX optimization to work but now it's all good, thanks!

    • @FE-Engineer
      @FE-Engineer 8 months ago +3

      Yea. I suddenly started getting a lot of comments about things being broken. So as soon as I could really dig in and figure out how to at least get people up and running, I tried to put out something that gives people at least a shot of having things work for now.

  • @xCROWNxB00GEY
    @xCROWNxB00GEY 8 months ago +2

    You are honestly my hero. I am still getting a lot of weird errors but everything is working.

    • @FE-Engineer
      @FE-Engineer 8 months ago +1

      Yea. I mean, fair warning: this literally disables some logic for the lowvram flag. Like, for real, stuff could break.
      But maybe some things potentially breaking seems better than "well, it straight up won't work" 😂

    • @xCROWNxB00GEY
      @xCROWNxB00GEY 8 months ago

      @@FE-Engineer I do prefer it running with constant warnings over errors that prevent me from running it at all.
      Do you still use it this way or are you using an alternative?
      I just started with AI images and could use any input.
      But because I have a 7900 XTX I feel like there are no options.

  • @EscaExcel
    @EscaExcel 8 months ago +1

    Thanks, this was really helpful; it was hard to find a tutorial that actually gets rid of the torch problem.

    • @FE-Engineer
      @FE-Engineer 8 months ago +2

      Glad this helped and worked! I agree. It’s difficult to find good information and things that actually work.

  • @user-ni7gv2ty2o
    @user-ni7gv2ty2o 8 months ago +1

    Thank you! After 2 days of struggling the problem is gone!

    • @FE-Engineer
      @FE-Engineer 8 months ago

      I’m glad it helped! Thank you for watching!

  • @nagilacarla325
    @nagilacarla325 6 months ago +2

    I have followed all these steps, but when I start the webui-user.bat it gives me this error:
    stderr: ERROR: Could not install packages due to an OSError: [WinError 5] Access denied: 'C:\\Users\\a\\stable-diffusion-webui-directml\\venv\\Lib\\site-packages\\onnxruntime\\capi\\onnxruntime_providers_shared.dll'
    Check the permissions.
    And I have already removed --onnx since I saw it's no longer necessary, but it keeps giving me this error. Could someone help me?

    • @baka5148
      @baka5148 6 months ago

      Having the same issue here... really hope a solution can be found soon

    • @nagilacarla325
      @nagilacarla325 6 months ago

      @@baka5148 I believe I solved it. I saw some topics with similar problems where most people deleted the venv folder and ran webui.bat again, letting cmd recreate the venv folder from scratch. I did that and it solved it; it opened right away.

    • @ALKSYM
      @ALKSYM 6 months ago +1

      Add "--reinstall-torch" to the args and launch webui-user.bat; after the UI launches, delete the "--reinstall-torch" arg. Hope it helps

  • @yannbarral7242
    @yannbarral7242 8 months ago +1

    Super helpful, thanks a lot!! The --use-directml in COMMANDLINE_ARGS was what I was missing for so long. You helped a lot here. If it helps others with random errors during installation and 'Exit with code 1': what worked for me was turning off the antivirus for an hour.

    • @FE-Engineer
      @FE-Engineer 8 months ago +1

      Interesting about the antivirus. Which antivirus do you use?
      Glad this helped. Most folks could probably just swap their command line arguments to --use-directml and it would probably work. Unfortunately, when I make a video, in order to avoid a mountain of "doesn't work" comments, I try to balance between what will fix it for most folks and including the information that should fix it entirely for 99.99% of folks. And of course, people have different code from different points in time, different systems, different Python versions, etc. So I try hard to make sure that, if nothing else, if you blow it away and start over, this should work and fix your problems. Hence why even when a video could be 1 minute with 1 small change, it can easily become 10+ minutes with the handful of "and if you happen to see this..." pieces.
      :-/ It is a difficult balancing act.

    • @FE-Engineer
      @FE-Engineer 8 months ago

      Thank you for the kind words, I am glad this helped you. Thank you for watching!

  • @Djangots
    @Djangots 2 months ago

    Many thanks! Your guide was very helpful within just the first 10 minutes

  • @le_crispy
    @le_crispy 7 months ago

    I never comment on videos, but you fixed my issue of Stable Diffusion not using my GPU. I love you.

    • @FE-Engineer
      @FE-Engineer 7 months ago

      I’m glad it helped and fixed your problems! Thank you so much for watching!

  • @faridabdurrahman6025
    @faridabdurrahman6025 6 months ago +2

    Help me, I get an error when I use the argument --onnx; it says launch.py: error: unrecognized arguments: --onnx

    • @FE-Engineer
      @FE-Engineer 6 months ago +2

      Remove --onnx. They changed the code again.

    • @Spaceguy
      @Spaceguy 6 months ago

      @@FE-Engineer It works, thank you

  • @NewHaven321
    @NewHaven321 7 months ago +5

    I get the following error when running the webui-user.bat file: "launch.py: error: unrecognized arguments: --onnx". I can still run if I remove the --onnx parameter, but then I have no Olive or ONNX tab in the interface. Appreciate any input here.

    • @MattStormage
      @MattStormage 7 months ago +2

      Same here

    • @IJN-Yamato
      @IJN-Yamato 7 months ago +3

      Hi! An update has been released; ONNX support is now installed automatically and no longer requires the argument in webui-user

    • @Omen09
      @Omen09 7 months ago +3

      @@IJN-Yamato But it doesn't show ONNX in the GUI

    • @GenericYoutuber1234
      @GenericYoutuber1234 7 months ago

      @@Omen09 You can fix this now with git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25. This will go back to the old version before the update. You may need to delete your requirements file with the change to add torch-directml before doing it. Then run webui-user.bat after changing it to include the command line parameters --use-directml --onnx. This will give you the ONNX tab like before, where you can follow the video from around the 8-minute mark.

    • @FranciscoSalazar-qi4mw
      @FranciscoSalazar-qi4mw 6 months ago

      I have the same error; have you been able to fix it? As they say in the comments, it is supposed to be automatic, but the ONNX tab does not appear

  • @rikaa7056
    @rikaa7056 8 months ago

    Thank you man, all the other tutorials on YouTube were useless. My CPU was at 99%; now you fixed it and my RX 6600 XT is doing the heavy lifting

    • @FE-Engineer
      @FE-Engineer 8 months ago

      Nice! Glad it helped! Thank you for watching!

  • @Meatbix75
    @Meatbix75 8 months ago +2

    Thanks for the tutorial. It certainly got SD working for me, which is excellent. However, the Olive optimisation doesn't seem to have any effect. I could run the optimisation even without modifying sd_models, but it made no difference to performance; I'm getting around 3.3 it/s with either the standard or optimised checkpoint. I've gone ahead and modified sd_models, but to no effect. GPU is an RX 6700 10GB, CPU is an i5 12400F, 32GB RAM.

    • @FE-Engineer
      @FE-Engineer 8 months ago +3

      Hard to say. I've found a lot of issues with the optimization; it's tricky to even get it to work a lot of the time.
      But if you aren't seeing any performance increase with it running, then my guess is that the model is already optimized.
      If you grab other models you might end up seeing the performance boost; it's probably just that the one you have is already optimized.
      You are welcome, thank you so much for watching. Sorry I don't have a better answer to this.

  • @lenoirx
    @lenoirx 7 months ago

    Thanks! After 3 days of trying workarounds, this guide finally worked out!

    • @FE-Engineer
      @FE-Engineer 7 months ago

      Yea, the changes they made were really kind of irritating, and while they are documented, a lot of people didn't really see how to fix things easily.

  • @tomaslindholm9780
    @tomaslindholm9780 8 months ago

    You were quick in some parts, but for the "entire" server restart (terminate batch job Y/N), just hit Ctrl+C.
    Thank you so much for this fix-the-guide guide. Hero!

    • @FE-Engineer
      @FE-Engineer 8 months ago +3

      😂😂 I was not going to make a video. But I decided to start from scratch and figure out all the trouble spots, and I was like... mmmm... I'll get too many comments about people having weird troubles, and it's hard to explain some of it over text.
      And yea, I try not to go too fast, but I also try to avoid pointlessly lingering. I tend to record and get a bit too in-depth and off-topic, and in editing I usually cut most of that out. It's just the way I naturally talk versus the cleanest way to do a how-to. It's a process. Plus I really am trying to get it down to more of a reflex, more natural for me, so I can do these without going too far off and also without going too fast. :-/

    • @tomaslindholm9780
      @tomaslindholm9780 8 months ago +1

      @@FE-Engineer Well, as a former systems engineer I understand you must have a great deal of confidence to do what you did, considering the promising title of your video. Brave and good! Thank you for sharing your skill with the rest of us kamikaze engineers. (BTW, it's inside a VM; just make it or break it seems like a good approach)

  • @nickraeyzej578
    @nickraeyzej578 6 months ago +3

    This worked great in 12/2023. The latest automatic conversion changes simply do not work and end up corrupted at random. Even when it does work, it makes automatic conversions for every single change you make to the image resolution. Is there a way to git clone the project version from when this method was perfectly fine, back when we had the ONNX/Olive conversion tab and one conversion per safetensor covered all resolutions on its own?

  • @nangelov
    @nangelov 6 months ago +2

    Sorry to bother you.
    I've done everything so far, except that when I start the webui, the interface loads but there are no ONNX or Olive tabs. Everything is slow on the RX 6800 XT (1.3 s/it). If I enable ONNX in the settings, I get a missing positional arguments error and I can't generate anything.
    Someone mentioned rolling back to an older UI version; I don't see how to do that - there are no different versions for this fork.

  • @amGerard0
    @amGerard0 7 months ago +2

    This is great! Thanks for the excellent video. I went from ~4 s/it to ~2 it/s on a 5700 XT! So *much* faster!

    • @FE-Engineer
      @FE-Engineer 7 months ago +1

      Yay! I’m glad it helped! Thanks so much for watching!

    • @sanchitwadehra
      @sanchitwadehra 7 months ago

      My 6600 XT went from 1.75 it/s to 2 it/s. Did you do something else? Could you please give me some recommendations on how you increased it so much

    • @amGerard0
      @amGerard0 7 months ago

      @@sanchitwadehra Make sure you have no other versions of Python, only 3.10.6.
      When I had other versions it just didn't work; maybe if you have another version it's slowing it down?
      Other than that I'm not sure.
      I only use:
      set COMMANDLINE_ARGS=--use-directml --onnx
      If you're using medvram or something, remove it and try again?
      Depending on the model it can be slower - if you're using a really big model that can affect it, and certain sampling methods are faster than others too.
      Likewise, if you are trying to generate images bigger than 512x512 (e.g. 768x512) then it will struggle.
      Try another model and see if it's just that, then try every sampling method available (about 5 worked for me, the others were a total artifact-ridden mess).

    • @sanchitwadehra
      @sanchitwadehra 7 months ago

      @@amGerard0 Maybe it's the Python version problem, as my PC has the latest Python version and I installed A1111 in a conda environment with Python 3.10.6. I also have ComfyUI on my PC in a different conda environment with Python 3.10.12. Maybe I will try doing the whole process again after deleting everything from my PC. Thx for sharing

  • @mrsir92
    @mrsir92 7 months ago +2

    It starts up for me but throws out a bunch of errors in the UI. I tried enabling ONNX but it doesn't do anything; I'm not able to see an ONNX tab.
    "ERROR: Exception in ASGI application"
    Any ideas?

    • @IJN-Yamato
      @IJN-Yamato 7 months ago

      The errors in the interface are a bug in the new version; unfortunately, the author of webui directml can do nothing about it. They do not interfere with use, but they are an interface defect.
      About the ONNX tab: I have the same problem. Rolling back to the previous version will help.

    • @mrsir92
      @mrsir92 7 months ago

      @@IJN-Yamato Thanks! This helped. Been fighting with this for a couple of days now 😅

    • @FE-Engineer
      @FE-Engineer 7 months ago

      Yes. I have started getting random reports from folks saying things are not working. I always know the code changed when I start getting several of these comments a day. :-/

    • @nielsjanssen2422
      @nielsjanssen2422 6 months ago

      @@IJN-Yamato Hey man, I can't seem to figure out HOW to fall back to an earlier version 😅 Can you explain? Is it on the GitHub of automatic1111 directml? Cuz I can't seem to find an "earlier" version

  • @Verlaine_FGC
    @Verlaine_FGC 6 months ago +2

    I keep getting this error:
    "launch.py: error: unrecognized arguments: --onnx"

    • @FE-Engineer
      @FE-Engineer 6 months ago +3

      --onnx is no longer needed. They changed the code. Just omit it from the command arguments

  • @MasterCog999
    @MasterCog999 8 months ago +1

    This guide worked great, thank you!

    • @FE-Engineer
      @FE-Engineer 8 months ago +1

      You are welcome! Thank you for watching!

  • @nourel-deenel-gebaly3722
    @nourel-deenel-gebaly3722 6 months ago

    Thanks a lot for the tutorial. It worked, but without the ONNX stuff unfortunately; patiently waiting for your new video on this matter.

    • @FE-Engineer
      @FE-Engineer 6 months ago

      It’s so much better too!

    • @FE-Engineer
      @FE-Engineer 6 months ago

      Sorry about the wait though. Sick daughter, sick son, surgery for my son, hospitalization for my son. It's... busy. Plus work and life and all that. Still, I do apologize wholeheartedly for the wait.

    • @nourel-deenel-gebaly3722
      @nourel-deenel-gebaly3722 6 months ago

      @@FE-Engineer No need to apologize, you're literally amazing, hope all goes well for you. Although I'll still be using this old and slow method since the new video is for higher-end cards and I have more of a potato than a GPU 😅, but hopefully I upgrade soon and benefit from it ❤️

  • @hoangduong2065
    @hoangduong2065 6 months ago +3

    Please help me, after following your video I'm stuck on: Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback):
    No module named 'keras.__internal__'

    • @2ubyme
      @2ubyme 6 months ago +1

      Same here. I surrendered after trying for 10 hours. I hope that someone finds out how to fix that. If I someday get it working I won't make any more changes.

    • @chilldesigns5256
      @chilldesigns5256 6 months ago +1

      Any fix?

    • @2ubyme
      @2ubyme 6 months ago +1

      @@chilldesigns5256 Nope. I stopped trying until there are more Google results concerning "No module named 'keras.__internal__'".

    • @wybo
      @wybo 6 months ago +1

      Same problem, just commenting in the hope of an answer

    • @karikaturdigital6123
      @karikaturdigital6123 6 months ago +3

      Try this: open cmd in the SD webui directory, then:
      venv\Scripts\activate
      pip install onnxruntime-directml
      pip install torch-directml
      pip install keras
      pip install tensorflow

  • @Thomas_Leo
    @Thomas_Leo 8 months ago

    Thank you so much! This was the only video that helped me. Liked and subscribed. 👌

    • @FE-Engineer
      @FE-Engineer 8 months ago

      I’m glad this helped! Thank you so much for your support!

  • @nienienie7567
    @nienienie7567 4 months ago

    Hey man! Great tutorial! Got any ideas for VRAM usage optimization on AMD? I'm using a modified BAT like below:
    set PYTHON=
    set GIT=
    set VENV_DIR=
    set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
    set COMMANDLINE_ARGS=--use-directml --medvram --always-batch-cond-uncond --precision full --no-half --opt-split-attention --opt-sub-quad-attention --sub-quad-q-chunk-size 512 --sub-quad-kv-chunk-size 512 --sub-quad-chunk-threshold 80 --disable-nan-check --use-cpu interrogate gfpgan codeformer --upcast-sampling --autolaunch --api
    set SAFETENSORS_FAST_GPU=1
    It helps a lot, but I still wanna squeeze out more. I'm using an RX 7600 with 8GB VRAM, 32GB RAM

  • @markdenooyer
    @markdenooyer 8 months ago +2

    Has anyone gotten past the 77-token limit on the prompt with ONNX DirectML? I really miss my super-long prompts. :(

    • @FE-Engineer
      @FE-Engineer 8 months ago

      Not with this version on Windows yet. :-/

  • @orestogams
    @orestogams 8 months ago

    Thank you so much, could not get this maze to work otherwise!

    • @FE-Engineer
      @FE-Engineer 8 months ago

      You are welcome! Glad it helped! Thanks for watching and supporting my work!

  • @RobertJene
    @RobertJene 6 months ago +1

    10:20 Use Ctrl+G to jump to a specific line in Notepad

  • @Azure1Zero4
    @Azure1Zero4 8 months ago

    Thanks a lot. Something to note: if you don't want ONNX mode enabled, just exclude it from the arguments.

    • @FE-Engineer
      @FE-Engineer 8 months ago

      This is true. Removing ONNX allows the other samplers to be used, but for AMD users the performance hit is a big one.

    • @Azure1Zero4
      @Azure1Zero4 8 months ago

      @@FE-Engineer That's true. When I try running ONNX-converted models it won't let me adjust the size of the image for some reason, and they don't seem to be producing results nearly as good as non-converted ones.

    • @Azure1Zero4
      @Azure1Zero4 8 หลายเดือนก่อน

      I think I might have figured out my issue. I think I'm maxing out my RAM and it's crashing the CMD prompt mid-optimization. Do you think you could do me a favor and tell me about how much system RAM you use when going through the optimization process? Going to upgrade and need to know how much.@@FE-Engineer

    • @Azure1Zero4
      @Azure1Zero4 7 หลายเดือนก่อน

      In case anyone needs to know, I required 32 GB of RAM to optimize models. So if you don't have that much, you're going to need to upgrade or download an already-optimized model. Something I had to learn the hard way. Hope this helps someone.

  • @magnusandersen8898
    @magnusandersen8898 6 หลายเดือนก่อน +3

    I've followed all your steps up until the 8:00 minute mark, where, after running the webui-user.bat file, I get an error saying "launch.py: error: unrecognized arguments: --onnx". Any ideas how to fix this?

    • @FE-Engineer
      @FE-Engineer  6 หลายเดือนก่อน +5

      Remove --onnx

  • @user-kj5ux9ms6q
    @user-kj5ux9ms6q 7 หลายเดือนก่อน

    Thank you so much. You have helped so many people with this video!

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      I’m glad it helped you!! Thanks so much for watching!

  • @Daxter250
    @Daxter250 8 หลายเดือนก่อน

    that was... the best AND ONLY tutorial i found that worked. my 5700 XT had no problems with stable diffusion half a year ago and then suddenly, poof, some BS about tensor cores, which i don't even have. all those wannabes on the internet simply said to delete venv and it will sort itself out. NO IT DOESN'T.
    this tutorial here does! thanks for the work you put in!
    btw, with those ONNX and Olive models i even turned the speed from fucking seconds per iteration to 2 iterations per second O.o, while also increasing the image size!

    • @DGCEO_
      @DGCEO_ 8 หลายเดือนก่อน

      I also have a 5700xt, just curious what it/s you are getting?

    • @Daxter250
      @Daxter250 8 หลายเดือนก่อน

      @@DGCEO_ 2 it/s as written in the last sentence. image is 512x512.

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน

      I’m glad this helped! Thank you so much for the kind words! :) and thank you for watching!

  • @Maizito
    @Maizito 6 หลายเดือนก่อน +1

    I finally managed to run SD with your tutorial. I have an RX 7000-series card; it didn't let me run with --onnx, and I saw in the comments that that argument is no longer needed, so I removed it from webui-user.bat and SD opens, but it goes very slow, between 1.5 and 2.5 it/s. Any solution to make it go faster?

    • @W00PIE
      @W00PIE 6 หลายเดือนก่อน

      That's exactly my problem at the moment with a 7900XTX. Really disappointing. Did you find a solution?

    • @Maizito
      @Maizito 6 หลายเดือนก่อน +1

      @@W00PIE No, I haven't found a solution yet :(

  • @pack9694
    @pack9694 7 หลายเดือนก่อน

    Thank you for helping me fix the Olive issue; you are amazing.

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      I’m glad this helped! Thank you so much for watching!

  • @evilivy4044
    @evilivy4044 8 หลายเดือนก่อน +3

    Great tutorial, thank you. How do you go about using "regular" models with the --onnx argument? Do I need to convert them, or should I look for and use only ONNX models?

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน +1

      Have to convert them basically. Occasionally you can find some models in ONNX format but it is not really super common…

  • @jordan.ellis.hunter
    @jordan.ellis.hunter 7 หลายเดือนก่อน

    This helped a lot to get it running. Thanks!

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      You are very welcome! Thank you so much for watching. Glad it helped!

  • @NA-oe5jj
    @NA-oe5jj 7 หลายเดือนก่อน

    you solved the exact problems i had. thanks for the truly best tutorial.

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      You are welcome, I am glad it helped! Thanks for watching

    • @NA-oe5jj
      @NA-oe5jj 7 หลายเดือนก่อน

      @@FE-Engineer Woke up today to it no longer working.
      Why computers be like this. :D
      When I attempt to use webui-user.bat
      it says installing requirements, then
      *** could not load settings.
      Then it tries to launch anyway and starts to complain about xformers and CUDA.
      I think this settings load is the issue. Ima fiddle at lunch, and then after work tonight I will do a complete reinstall again using your handy guide.

  • @DarkwaveAudio
    @DarkwaveAudio 8 หลายเดือนก่อน

    Thanks man you helped a lot. much appreciated for your time and effort.

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน

      You are welcome! Thanks so much for watching!

  • @metaphysgaming7406
    @metaphysgaming7406 7 หลายเดือนก่อน

    Thanks so much for this video, much appreciated!

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      You are welcome I hope it helped! Thanks for watching!

  • @miosznowak8738
    @miosznowak8738 7 หลายเดือนก่อน

    That's the only solution I found that actually works, thanks :))

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      I’m glad it helped and got it running :). Thanks so much for watching!

  • @obiforcemaster
    @obiforcemaster 4 หลายเดือนก่อน +2

    This no longer works, unfortunately. The --onnx command-line argument was removed.

  • @JustisKai
    @JustisKai 7 หลายเดือนก่อน +1

    Everything runs fine until the final launch where i get launch.py: error: unrecognized arguments: -onnx. Any advice?

    • @IJN-Yamato
      @IJN-Yamato 7 หลายเดือนก่อน +1

      ONNX support is now built in and no longer requires the --onnx argument in webui-user.bat

    • @FE-Engineer
      @FE-Engineer  6 หลายเดือนก่อน

      Thanks!

  • @rivariola
    @rivariola 2 หลายเดือนก่อน +1

    hello sir I keep getting an error that is driving me nuts:
    DLL load failed while importing onnxruntime_pybind11
    do you know what it means?

  • @davados1
    @davados1 6 หลายเดือนก่อน +1

    Thank you for the tutorial. So I got the webui to load up, but I don't have the ONNX and Olive tabs at the top; they're just not there, oddly. Would you know why? Has the webui changed and removed them?

  • @Hozokauh
    @Hozokauh 7 หลายเดือนก่อน +2

    At 7:00, you finally got past the torch/CUDA test error. For me, however, it did not resolve the issue. I went back and followed the steps twice over, same result: still getting the torch CUDA test failure. Any ideas?

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      --use-directml in your startup script

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน +1

      I did not skip the torch and CUDA test. From my experience, if you are having problems and skip it, it will never work, because that test is designed to simply check whether it thinks it can run on the GPU.

    • @Hozokauh
      @Hozokauh 7 หลายเดือนก่อน

      @@FE-Engineer thank you for the timely feedback! You are the best. Will try this out!

  • @lucianoanaquin4527
    @lucianoanaquin4527 8 หลายเดือนก่อน +1

    Thanks for the amazing tutorial, bro! I only have one question: watching other videos, I noticed that they have more sampler options. What do I have to do to have them too?

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน +3

      The other samplers don't work in this version with ONNX and DirectML.
      So the options are: run ROCm on Linux,
      or wait for ROCm on Windows, when we can just use normal Automatic1111 without needing DirectML and ONNX.

  • @mobas07
    @mobas07 8 หลายเดือนก่อน +1

    Whenever I try to optimise any model for olive it gets to this part then gives an error:
    [2024-01-07 16:19:04,824] [INFO] [engine.py:929:_run_pass] Running pass optimize:OrtTransformersOptimization
    Press any key to continue . . .
    Anyone know how to fix it?

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน +1

      Strange. I think someone else mentioned this. I might have to dig in and see what's going on, or whether I can recreate this.
      I finally got the stuff I wanted done on my website, so that is in a reasonably good place and I can now get back to making videos!

    • @zerohcrows
      @zerohcrows 7 หลายเดือนก่อน

      did you ever fix this?

    • @mobas07
      @mobas07 7 หลายเดือนก่อน +1

      Nope

  • @ALKSYM
    @ALKSYM 6 หลายเดือนก่อน

    For all the people who have the problem where it says "launch.py: error: unrecognized arguments: --onnx. Press any key":
    remove the --onnx from the args.
    If after that it says something like "stderr: ERROR: Could not install packages due to an OSError: [WinError 5] Access denied" (Accès refusé),
    add "--reinstall-torch" to the args, launch webui-user.bat, and after it starts, remove "--reinstall-torch"!
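    The two-step fix above, as a hedged sketch of the relevant lines in webui-user.bat (the surrounding lines follow the file shown in the video; --reinstall-torch is a one-time flag you remove again after the first successful launch):

    ```bat
    rem webui-user.bat -- temporary state for the first launch:
    rem --onnx removed (newer versions reject it), torch reinstalled once
    set PYTHON=
    set GIT=
    set VENV_DIR=
    set COMMANDLINE_ARGS=--use-directml --reinstall-torch
    call webui.bat
    ```

    After the first successful start, change the line back to set COMMANDLINE_ARGS=--use-directml so torch is not reinstalled on every launch.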

  • @mjtech1937
    @mjtech1937 8 หลายเดือนก่อน

    This is a great tutorial. The it/s speeds I'm getting with my AMD 7900 XTX are sick, faster than Midjourney. The only question I have is: has anyone gotten inpainting working? Otherwise this is an amazing solution for AMD users.

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน

      It works without issues if you use ROCm on Linux; overall speed for me takes maybe a 10% hit or so.
      Unfortunately this setup uses DirectML and ONNX with a lot of optimizations in place, and those technologies are somewhat less developed as far as extensions and things just working.
      So basically, until ROCm is on Windows, you kind of have to pick your poison:
      dual-booting and running Linux, or the various ways to do it on Windows, all of which have some serious drawbacks.

  • @astarwolfe1411
    @astarwolfe1411 8 หลายเดือนก่อน +1

    I'm not sure if this is a batch issue or a computer issue, but after getting the error and the "press any key to continue…" prompt, when I press any key the window closes immediately and doesn't let me type anything in.

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน +1

      I saw someone else mention something similar. I’ll go in and take a look here when I get some time.
      I’m not sure if something maybe changed?

    • @semirvin
      @semirvin 7 หลายเดือนก่อน +1

      I found a solution for that. When you double-click and run webui-user.bat, the console will immediately close. Try running it from a cmd prompt, exactly like in this video. That way the console won't close.

  • @waltherchemnitz
    @waltherchemnitz 3 หลายเดือนก่อน

    What do you do if, when you activate the venv, you get the message "cannot be loaded because running scripts is disabled on this system"? I'm running the terminal as Administrator, but it won't let me activate the venv.
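    That message is PowerShell's script execution policy blocking venv\Scripts\Activate.ps1; it is not an Administrator issue. A hedged sketch of two common workarounds (not from the video; check your own security requirements before relaxing the policy):

    ```bat
    rem Option 1: activate the venv from cmd.exe instead of PowerShell
    venv\Scripts\activate.bat

    rem Option 2: in PowerShell, allow locally created scripts for your user:
    rem   Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
    ```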

  • @dr.bernhardlohn9104
    @dr.bernhardlohn9104 7 หลายเดือนก่อน

    So cool, many, many thanks!

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      Glad it helped! Thank you for watching!

  • @amrkhaled5806
    @amrkhaled5806 7 หลายเดือนก่อน

    Great video. Finally, it works, after three days of watching tutorials and searching the internet. I have a small issue though. When generating images it uses my iGPU instead of my AMD GPU. I tried adding the argument --device-id 1 to the webui-user file; now it uses my AMD GPU, but I've noticed in Task Manager that it spikes to 100% for a second, then returns to 0%, then back to 100%, and so on. After that, the AMD software pops up with a "report an issue" button and the image comes out grey. What causes this problem and how do I fix it? P.S. I have an AMD Radeon 530 GPU

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      You might try some of the settings like --medvram.
      Is it just the GPU that is spiking hard?
      It sounds like it is actually overloading the GPU and the GPU is basically crashing.
      I have not encountered this personally, so it is hard for me to say for sure. But try some of the other VRAM settings, and potentially the RAM settings, to see if that helps.

  • @aadilpatel6591
    @aadilpatel6591 8 หลายเดือนก่อน

    Great guide. Thanks.
    What are the chances that we will be able to use reactor (face swap) or animatediff with this repo?

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน

      You are welcome! Thank you for watching! My guess is: not very good… most of the extensions don't play well with ONNX and DirectML, and my guess is that no one is really working on getting them to work with ONNX and DirectML either. :-/
      You can always try; I have just had very little luck with most extensions that actually "do things".

    • @aadilpatel6591
      @aadilpatel6591 8 หลายเดือนก่อน

      @@FE-Engineer Will they be usable once ROCm is ready for Windows?

  • @NXMT07
    @NXMT07 6 หลายเดือนก่อน

    Thanks for the tutorial; it really did work with my RX 580, albeit very slowly.
    Can you please make a tutorial on how to use Hugging Face diffusers models with Automatic1111? I've tried to find the safetensors file and even converted the diffusers model into one, but to no avail.

    • @FE-Engineer
      @FE-Engineer  6 หลายเดือนก่อน +2

      Last I knew, most of the additional pieces of Automatic1111 will not work with ONNX. They might work with DirectML only, but that has a big performance penalty.
      Overall for AMD, your best bet right now is ROCm on Linux. Slightly slower than ONNX and Olive, but all the functionality works correctly. It's also nice that you don't have to fiddle with converting to ONNX and the headache of what does and does not work, etc. :-/

    • @NXMT07
      @NXMT07 6 หลายเดือนก่อน

      @@FE-Engineer Well, I heard that ZLUDA enables CUDA on AMD GPUs, so ONNX shouldn't be a problem after a period of development on Windows. I have managed to play around with it and can confirm it does indeed work with CUDA-related programs; I haven't gotten it to work with Automatic1111 though.
      Still, my trouble with the Hugging Face diffusers remains unsolved. I think it is an entirely new problem.

  • @Doomedjustice
    @Doomedjustice 7 หลายเดือนก่อน

    Hello! Thank you very much for the tutorial; it really helped. I wanted to ask: is there any way to use the generic sampling methods that are usual for Automatic1111?

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      You have to drop ONNX. But you will take a big performance hit. Or use ROCm on Linux.

  • @ktoyaaaaaa
    @ktoyaaaaaa 5 หลายเดือนก่อน

    Thank you! it worked

    • @FE-Engineer
      @FE-Engineer  5 หลายเดือนก่อน

      :):) glad you got it working! Thank you for watching!

  • @mgwach
    @mgwach 8 หลายเดือนก่อน

    Thanks!! Got everything up and running. Question though.... do you know if LoRAs are supposed to work with Olive yet?

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน +1

      No idea. My guess would be no.
      And to be clear: I am 99% sure ONNX itself does not care, but Automatic1111 with DirectML is most likely not set up to support it.

    • @mgwach
      @mgwach 8 หลายเดือนก่อน

      @@FE-Engineer Gotcha. Okay, thanks for the response. :) Yeah, it seems that whenever I select a LoRA it's not recognized at all, and none of the prompts make any difference for it.

  • @Slavius84
    @Slavius84 2 หลายเดือนก่อน +1

    I have an RX 570, but it's so slow; I don't know what to do about it.
    A 512x512 image takes almost 5-6 minutes to generate. Does that mean I need to buy a new video card?

    • @ethanwebb6122
      @ethanwebb6122 25 วันที่ผ่านมา +1

      It's probably using the CPU.

  • @pyrageis9928
    @pyrageis9928 6 หลายเดือนก่อน

    I get an error stating "AttributeError: module diffusers.schedulers has no attribute scheduling_lcm. Did you mean: 'scheduling_ddim'?"
    edit: I just had to delete the venv folder

  • @arcadiandecay1654
    @arcadiandecay1654 7 หลายเดือนก่อน

    This has been a lifesaver, thanks! One thing I did notice after I got this working (perfectly, actually) is that there are some sampling methods missing, like DPM++ SDE Karras. Do you know if that's something that could be manually installed? I tried doing a git clone of the k-diffusion repo and a git pull, but that didn't get them to show up.

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน +1

      Yea. They don’t work with ONNX. :-/

    • @arcadiandecay1654
      @arcadiandecay1654 7 หลายเดือนก่อน

      Oof lol. Thanks! Well, I'm going to count my blessings, since I was floundering before finding this tutorial. I have Linux on a couple other disks and one of them is Ubuntu, so I'm going to install it on that, too.

  • @michaelbuzbee5123
    @michaelbuzbee5123 8 หลายเดือนก่อน

    I was having trouble with my A1111 being slow, so after searching around I found your fix video and decided to do a clean install. I already downloaded a bunch of models though; how does one run them through ONNX? And am I right in assuming I can no longer just add the models to the Stable Diffusion folders? I think my PC specs are the same as yours.

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน

      So you need to optimize them for Olive and ONNX.
      I have a pretty short video about this.
      You should be able to optimize them straight from your normal models folder. Once optimized, they will be in onnx or olive-cache (I think those are the folder names).
      But yes, you can use them. Just not SDXL models; I have yet to get SDXL to work correctly with DirectML and ONNX. :-/

  • @EricFluffy
    @EricFluffy 8 หลายเดือนก่อน +1

    Is there a fix for the AssertionError when trying to optimize SDXL models? It works perfectly fine for SD and ONNX models, but it can't seem to optimize SDXL models.

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน +1

      I have never gotten sdxl models to optimize correctly. So no fix that I am aware of. And I tried a fair number of things. :-/

    • @EricFluffy
      @EricFluffy 8 หลายเดือนก่อน

      ​@@FE-Engineer I see. Is there any chance you could make a tutorial on how to convert Civitai models based on AMD's latest AI blog post where they outline using Olive and the DirectML extension? It seems like Olive can optimize SDXL models, but it currently doesn't work with the extension, and for the life of me I can't figure out how to make it work with locally downloaded/Civitai models. It seems like more tedium, but if it can convert SDXL models, I'm alright with it.

    • @EricFluffy
      @EricFluffy 8 หลายเดือนก่อน

      Also, I'm running into a weird issue where embeddings aren't showing up at all in the textual inversion tab. I tried removing them all from my device and using a different drive; the same message telling me where to put embeddings shows up.

  •  4 หลายเดือนก่อน

    No method works for me. I have this error: AttributeError: module 'onnxruntime' has no attribute 'SessionOptions'

  • @DrMacabre
    @DrMacabre 7 หลายเดือนก่อน +1

    Hi, for some unknown reason I'm getting "launch.py: error: unrecognized arguments: --onnx". Everything was working yesterday; I reinstalled Windows and Stable Diffusion on a new SSD and now I'm getting this error. No typo in the bat file.

    • @DrMacabre
      @DrMacabre 7 หลายเดือนก่อน

      Luckily, I saved yesterday's install and it's working. No idea why today's install doesn't; that's kinda weird. Has anyone managed to load SDXL models with this?

    • @IJN-Yamato
      @IJN-Yamato 7 หลายเดือนก่อน +1

      Hi! An update has been released; ONNX support is now built in and no longer requires the --onnx argument in webui-user.bat.

    • @IJN-Yamato
      @IJN-Yamato 7 หลายเดือนก่อน

      @@DrMacabre Is there any chance I could get your previous version of Stable Diffusion (that is, the one you currently have installed)? After today's update, I can't use the new version of Stable Diffusion.

    • @DrMacabre
      @DrMacabre 7 หลายเดือนก่อน

      @@IJN-Yamato Sure, I'll check the size to see if I can upload it somewhere.

    • @Omen09
      @Omen09 7 หลายเดือนก่อน +1

      You can get the old one with: git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25
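      Pinning the DirectML fork to that pre-update commit could look like the sketch below (run inside the stable-diffusion-webui-directml folder; the commit hash is the one given above, and the branch name for returning to the latest code is an assumption):

      ```bat
      rem inside the stable-diffusion-webui-directml folder
      git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25

      rem later, to return to the latest code (branch name may differ):
      rem   git checkout master
      ```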

  • @such-ej
    @such-ej 6 หลายเดือนก่อน

    Unfortunately it doesn't work for my RX 5700. I've reached the "Olive tab" step; it fails to optimize models.
    For future tutorials, it would be better to reference specific git commits, because the developers seem to like making breaking changes for AMD users.

  • @user-cw8pm3ox1q
    @user-cw8pm3ox1q 7 หลายเดือนก่อน

    Thanks so much for the video! I wonder why I need an internet connection when converting "normal" models (with the .safetensors file extension). Due to my poor network, Python always raises a "ReadTimeout" error whenever I click the "Convert & Optimize checkpoint using Olive" button. Do I need to download something else to convert a model? I thought I only needed my own GPU to compute.

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      That is interesting. I did not know it needed to get anything from the internet. I am not sure, to be honest.
      Are you running it on an old spinning hard drive? Is it possible that the read timeout is from your disk drive?

  • @chrisc4299
    @chrisc4299 8 หลายเดือนก่อน

    Hello, thank you very much for the video. I have a question: how can I use a VAE with the optimized models? Do you have to convert it too? I'd appreciate your help, since placing the VAE in the regular folder has no effect on generation.

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน

      You will need to run ROCm in Linux to get full functionality like that.

  • @BOIWHATmusic
    @BOIWHATmusic 7 หลายเดือนก่อน +1

    I'm stuck on the "installing requirements" line; it's taking a really long time. Is this normal?

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      Depends on internet connection and some other things. But yes. It is not exactly fast.

  • @PuMa10w
    @PuMa10w 6 หลายเดือนก่อน +1

    I get "launch.py: error: unrecognized arguments: --onnx" on the final step. What should I do now?

    • @FE-Engineer
      @FE-Engineer  6 หลายเดือนก่อน

      Remove --onnx

    • @DJ_Kie
      @DJ_Kie 2 หลายเดือนก่อน

      @@FE-Engineer Bruv, it took me like 20 minutes to work out what you meant haha. @PuMa10w you need to edit the webui-user.bat file: in the COMMANDLINE_ARGS line with --use-directml --onnx, remove the --onnx part. If you are a dum dum like me, I hope this helped.
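      The edit described above, as a sketch of the relevant line in webui-user.bat (assuming the file matches the one set up in the video):

      ```bat
      rem before (older instructions):
      rem   set COMMANDLINE_ARGS=--use-directml --onnx
      rem after (newer versions of the fork reject --onnx):
      set COMMANDLINE_ARGS=--use-directml
      ```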

  • @gyrich
    @gyrich 6 หลายเดือนก่อน

    Thanks for this. I can actually run SD on my AMD PC, but it doesn't seem to be using the GPU (RX 6600 8 GB) at all. I can render individual images in ~30-60 seconds. None of the solutions I've found online make it use the GPU. Do you know how I can get SD to use the GPU so I can generate more/faster?

    • @FE-Engineer
      @FE-Engineer  6 หลายเดือนก่อน +1

      Yes. Stay tuned. I have a new video coming out because the code has changed a decent amount and there is a better way now!

  • @user-db9pl9oh4b
    @user-db9pl9oh4b 7 หลายเดือนก่อน

    Thanks for the video. May I ask what's your GPU and how's the performance? Cheers!

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      7900 XTX :)
      Onnx/olive - 22 it/s
      ROCm - 18 it/s
      DirectML non ONNX - 6 it/s

    • @user-db9pl9oh4b
      @user-db9pl9oh4b 7 หลายเดือนก่อน

      @@FE-Engineer I'm interested to know whether you really need an Nvidia GPU, or whether AMD is enough. Perhaps a good video to make in the future would be one comparing the two GPU makers? Thanks!

  • @mrhobo7103
    @mrhobo7103 8 หลายเดือนก่อน

    great tutorial, mine stopped working a few days ago and i couldn't find a fix anywhere. although for some reason generating an image makes my pc slow to a crawl, and it didn't do that before it broke. the image generation itself is still fast though. 6600 XT

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน

      Image generation makes your pc slow down? Interesting. Did you previously use any unusual flags?

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน

      I would not be surprised by this during model optimization, but during image generation it does surprise me a bit…

    • @macnamararj
      @macnamararj 8 หลายเดือนก่อน

      @@FE-Engineer Same here, it slows down too; with the non-ONNX/Olive version this didn't happen.

  • @Guillermo-th4dh
    @Guillermo-th4dh 8 หลายเดือนก่อน

    Hello sensei, I'll tell you that the entire tutorial is 10 out of 10... I just wanted to ask: I have a problem when I want to optimize SDXL models, and even other custom ones from Civitai, and they give me an error. What could it be?
    Thanks

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน +1

      I have not been able to get SDXL working on this setup with Automatic1111 DirectML. I tried a few months ago and could not get it working, and I have not honestly tried again recently.
      I also run ROCm on Linux, and that setup basically just works for everything. So I did that to get SDXL and bypass all the complexity of what you can and cannot do with DirectML and ONNX.

  • @wilcoengelsman8159
    @wilcoengelsman8159 6 หลายเดือนก่อน

    Thank you for the guide; it is, however, already slightly outdated. I did manage to get everything working using this tutorial though.
    When I use Olive/ONNX instead of just DirectML, my image has a lot more noise, even on the same sampler. Is there something I can do about that? Also, generating larger than 512x512 crashes the ONNX implementation.

    • @FE-Engineer
      @FE-Engineer  6 หลายเดือนก่อน +1

      So you don't need to use --onnx in the command-line arguments anymore when launching.
      ONNX has a lot of peculiarities, and sadly most things other than generating an image do not work properly with it.

  • @acho97x
    @acho97x 18 วันที่ผ่านมา

    In cmd, when I press any key it closes the window. Which key do you press?

  • @ViniCmpss
    @ViniCmpss 3 หลายเดือนก่อน

    I get this error:
    raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
    AttributeError: module 'torch' has no attribute 'dml'

  • @user-mg7fv9cx8b
    @user-mg7fv9cx8b 8 หลายเดือนก่อน

    Thanks for your video. You are my hero :-) I thought I'd never get SD running on my AMD - until I saw your video...
    I also tried to use another checkpoint - stable-diffusion-inpainting.
    I was able to download the model; the log says: Model saved: C:\..\sd-test\stable-diffusion-webui-directml\models\ONNX-Olive\stable-diffusion-inpainting
    When I try to use that model I get RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Conv node.
    When I try to optimize the model I get "...\sd_olive_ui.py", line 358, in optimize assert conversion_footprint and optimizer_footprint
    AssertionError
    Is it somehow possible to use an inpainting model on AMD? Or what am I doing wrong?

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน

      So you can definitely do it with ROCm on Linux.
      On Windows, I haven't been able to get inpainting working properly.

  • @Grendel430
    @Grendel430 7 หลายเดือนก่อน

    Thank you!

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      No problem! Thanks for watching!

  • @Justin141-w3k
    @Justin141-w3k 8 หลายเดือนก่อน

    This is the only tutorial that has worked.

    • @Justin141-w3k
      @Justin141-w3k 8 หลายเดือนก่อน

      New issues. I managed to generate an image of a car though.

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน

      You are seeing new issues?

    • @Justin141-w3k
      @Justin141-w3k 8 หลายเดือนก่อน

      Regarding the only valid links being Hugging Face.@@FE-Engineer

    • @Justin141-w3k
      @Justin141-w3k 8 หลายเดือนก่อน

      After optimizing I receive this error:
      InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from C:\AI\stable-diffusion-webui-directml\models\ONNX-Olive\stable-diffusion-v1-5\unet\model.onnx failed:Protobuf parsing failed.@@FE-Engineer

  • @lake3708
    @lake3708 6 หลายเดือนก่อน

    An excellent guide, but I have a question: there's a .safetensors checkpoint that has a config attached in the .yaml format. After optimization, the program stops seeing the config and generates noise. Do you have any idea how to fix this problem?

    • @FE-Engineer
      @FE-Engineer  6 หลายเดือนก่อน

      Ohh. Not sure on that one. But I have a new video coming out with a much better way of doing this!

  • @n3mesis633
    @n3mesis633 5 หลายเดือนก่อน

    Question: when my cmd window opens after I run the torch DirectML script, it says "press any key to continue". However, whenever I do that, it closes itself. Any thoughts?

    • @FE-Engineer
      @FE-Engineer  5 หลายเดือนก่อน

      Read the video description. The code has been updated; you might want to use ZLUDA if you are on AMD.

  • @JennyJenny-ur1fo
    @JennyJenny-ur1fo 2 หลายเดือนก่อน +1

    Thanks for the video. Please create a video on how to fix the error: RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check when installing from this resource: lllyasviel/stable-diffusion-webui-forge

  • @TheBrainAir
    @TheBrainAir 4 หลายเดือนก่อน

    I did all the steps and get: AttributeError: module 'torch' has no attribute 'dml'

  • @chaitanyamore8786
    @chaitanyamore8786 5 หลายเดือนก่อน +1

    Can you show how to use Dreambooth with AMD?
    😅 It's not supporting xformers/torch; I've been trying many times on different versions and it's getting tougher and tougher.

    • @FE-Engineer
      @FE-Engineer  5 หลายเดือนก่อน +1

      I used Dreambooth on Linux with AMD. You might be able to do it with ZLUDA. Maybe.

    • @chaitanyamore8786
      @chaitanyamore8786 5 หลายเดือนก่อน

      @@FE-Engineer Does that work on torch 1.31.1?
      On Windows with AMD? Old Dreambooth?

  • @nextgodlevel
    @nextgodlevel 8 หลายเดือนก่อน

    great tutorial, but i have a question: when I try to optimize some other stable diffusion models, they optimize correctly, but the output images they give are not very clear; it always generates somewhat foggy images.
    Also, I can't generate images with a size greater than 512x512; the other way I do this is by upscaling 512x512 images within stable diffusion, and that gives very good output as well.
    my GPU: 6750 XT

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน +1

      If the image looks foggy like that, it likely means you need to run a VAE with the model. I don't remember offhand if I was ever able to get a VAE to work properly with auto1111 on Windows, though. Sorry.

  • @sanchitwadehra
    @sanchitwadehra 7 หลายเดือนก่อน

    wow thanks, dhanyavad (thank you)!

    • @FE-Engineer
      @FE-Engineer  7 หลายเดือนก่อน

      You are very welcome! Thanks so much for watching!

  • @mojlo4ko998
    @mojlo4ko998 8 หลายเดือนก่อน +1

    legend

    • @FE-Engineer
      @FE-Engineer  8 หลายเดือนก่อน

      😂 thank you! I hope this helped!