ForserX/StableDiffusionUI 01 : Stable Diffusion Using ONNX The Easiest Way For AMD GPU Users

  • Published 22 Jul 2024
  • Installing Python 3.10.6
    www.python.org/downloads/release/python-3106/
    Installing Git
    git-scm.com/download/win
    Installing Stable Diffusion XUI
    github.com/ForserX/StableDiffusionUI/releases
    Donate to me at
    Ko-fi: ko-fi.com/tongtong
    Paypal: tongtong83@gmail.com
    What kind of video do you want to see next? Do let me know
    Timecode
    0:00 - Intro
    0:08 - Stable Diffusion XUI ONNX Demo
    0:44 - Sampling Method Test Using The Same Seed
    1:21 - Installation Pre-requisite
    1:53 - Actual Stable Diffusion XUI Installation
    2:37 - Converting Model To ONNX Usable Format
    3:01 - Size Comparison Between Default and ONNX Model
    3:09 - Things To Take Note During Conversion
    This is Stable Diffusion for #AMDGPU Windows users, and you are currently watching Stable Diffusion using #ONNX, the easiest way.
    Right now, I'm going to show you a quick demo of how it works. All you have to do is click the XUI executable file and type in a prompt. For this example, I'm going to use 1girl as a demo. As you can see, it only took 10.85 seconds to generate a 512x512-pixel image, and I'm using an #AMD RX 6600 XT. Generation times vary depending on the AMD GPU you use; I've only shown mine as a quick reference.
    I have tested all the available sampling methods for the animatrix V20 model using the same seed for comparison. In my testing, all look great except PNDM and LMSDiscrete. All the images take 10 to 11 seconds to generate, except DPM Discrete at 20.79 seconds and Heun Discrete at 20.81 seconds. What do you think? Feel free to do your own experiments with the sampling methods.
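A fair sampler comparison like the one above means fixing the seed and timing each method separately. The sketch below is only a timing harness; the `generate` function is a placeholder standing in for the real XUI/diffusers call, which is not reproduced here:

```python
import time

# Sampler names taken from the comparison in the video.
SAMPLERS = ["EulerDiscrete", "PNDM", "LMSDiscrete", "DPMDiscrete", "HeunDiscrete"]

def generate(sampler: str) -> None:
    """Placeholder for a real txt2img call with the given sampling method."""
    time.sleep(0.01)  # pretend to render a 512x512 image

def time_sampler(sampler: str, runs: int = 2) -> float:
    """Average wall-clock seconds per image for one sampling method."""
    start = time.perf_counter()
    for _ in range(runs):
        generate(sampler)
    return (time.perf_counter() - start) / runs

results = {s: time_sampler(s) for s in SAMPLERS}
for sampler, seconds in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{sampler:15s} {seconds:6.2f} s")
```

Swapping the placeholder for a real pipeline call keeps the harness unchanged; only `generate` needs to change.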
    Before you begin, make sure you have already installed #Python 3.10.6 and #git.
    For #Python, copy and paste the link from the description below. Scroll all the way down until you see the Windows installer (64-bit). Download and install it; just make sure you tick the checkbox to add Python 3.10.6 to PATH.
    For #git, paste the link from the description as well. Download the 64-bit version and install it normally.
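A quick pre-flight sketch to confirm both prerequisites before running the installer; it checks that the interpreter is on the 3.10 line the video targets and that git is reachable on PATH (the "add to PATH" checkbox matters for exactly this):

```python
import shutil
import sys

# The video targets Python 3.10.6; any 3.10.x on PATH is close enough here.
ok_python = sys.version_info[:2] == (3, 10)
print(f"Python {sys.version.split()[0]}: {'OK' if ok_python else 'expected 3.10.x'}")

# shutil.which returns the full path of the executable, or None if not on PATH.
git_path = shutil.which("git")
print(f"git: {git_path if git_path else 'NOT FOUND, reinstall Git or fix PATH'}")
```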
    Here comes the best part you have all been waiting for. At the time of recording, the latest version is 3.3.2, but please do not use this one, as it is missing the EXE file. Instead, download version 3.3.1. Once you have downloaded the file, just click the XUI executable file as you saw at the beginning of the video. That's it, and it will install everything for you. Just make sure you have an internet connection with you, no pun intended. Once everything is done, you will be greeted with the GUI, and you don't have to use a web browser at all.
    The next part is converting your existing model into an ONNX-usable format. First, click Import in the Model column; a Model Importer will pop up. Drag the existing model you downloaded onto the Model Importer and click Convert! Please bear in mind that it will take a really long time. I did not record the time, as it was too unbearable to wait, so I just went and did something else. The size of the converted model will be significantly bigger as well.
    During the model conversion process, the app will appear stuck at the startup extract screen for a really long time, so do not fret; it's totally normal. You will see SD done ONNX on the screen when the conversion is complete. Now you can use ONNX Stable Diffusion as usual. If you like my video, do consider donating via my PayPal and Ko-fi pages. It really helps me a lot to produce more useful videos like this in the future. Thanks.
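The size difference mentioned at 3:01 is easy to measure yourself. This stdlib sketch totals a model folder's size; the temp-directory demo at the bottom only keeps the snippet runnable, and in practice you would point it at your original checkpoint folder and the converted folder under models/:

```python
import os
import tempfile

def dir_size_gb(path: str) -> float:
    """Total size of every file under `path`, in gigabytes."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / 1024**3

# Throwaway demo directory; replace with your real model folders to compare.
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "weights.bin"), "wb") as f:
        f.write(b"\0" * 1024)
    print(f"{dir_size_gb(tmp):.9f} GB")
```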
    #XUIAMD #AIForAMD #StableDiffusionForAMD #SDAMD

Comments • 36

  • @tongtongrizhi
    @tongtongrizhi  4 months ago

    Installing Python 3.10.6
    www.python.org/downloads/release/python-3106/
    Installing Git
    git-scm.com/download/win
    Installing Stable Diffusion XUI
    github.com/ForserX/StableDiffusionUI/releases
    Donate to me at
    Ko-fi: ko-fi.com/tongtong
    Paypal: tongtong83@gmail.com

  • @tongtongrizhi
    @tongtongrizhi  4 months ago +5

    Seems like very few people are interested in the latest video on Stable Diffusion ONNX. While it's not the latest and greatest, I always make sure everything is easy to understand and keep things simple for AMD GPU users. Has everyone jumped ship to Nvidia?

    • @doseofjean
      @doseofjean 4 months ago +2

      I love AMD, thanks

    • @tongtongrizhi
      @tongtongrizhi  4 months ago

      @@doseofjean thank you so much 🥹

    • @konstabelpiksel182
      @konstabelpiksel182 3 months ago

      Your video is easy to understand. I was using an RX 6600 with DirectML. Basic stuff is OK, but doing extra things with ControlNet etc. and having to convert models will drive me crazy. No choice but to switch to Nvidia :(

    • @tongtongrizhi
      @tongtongrizhi  3 months ago

      @@konstabelpiksel182 I don't have the money to switch, so I'm still stuck with AMD since I'm jobless now. Nvidia is the mainstream for AI; AMD still has limited support for it.

    • @rwarren58
      @rwarren58 3 months ago

      No way. I want to hear about SDXL and AMD and whether it's possible. I have an older card but can still run SD. I just worry about messing up that install, and converting files is a bit time-consuming, but for a better result? Let's try it.

  • @TdclatinsportsBlogspot
    @TdclatinsportsBlogspot 9 days ago +1

    *Works for AMD??? I wanted to install Stable Diffusion, but I had a lot of trouble installing it and got nowhere; I couldn't.*

  • @canakar1657
    @canakar1657 a month ago

    Greetings, thank you for the video. I used Epicrealism as the model and it converted without any problems. But it did not convert a model ending in XL. Will I not be able to use XL models, or am I missing something?
    OSError: Error no file named model_index.json found in directory C:/Users/random/Desktop/folder/models/diffusers/ponyDiffusionV6XL_v6StartWithThisOne.
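For context, the OSError above is diffusers reporting that it can only load a model from a folder in its converted layout, and model_index.json is the marker it looks for. A purely illustrative pre-check (the folder path is the one from the error above):

```python
import os

def looks_like_diffusers_model(folder: str) -> bool:
    """A converted (diffusers/ONNX) model folder must contain model_index.json."""
    return os.path.isfile(os.path.join(folder, "model_index.json"))

# An unconverted single-file checkpoint dropped into models/ will fail this
# check until the Model Importer has finished converting it.
print(looks_like_diffusers_model("models/diffusers/ponyDiffusionV6XL_v6StartWithThisOne"))
```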

    • @tongtongrizhi
      @tongtongrizhi  a month ago

      It doesn't work with SDXL models. Use SD 1.5 models only.

    • @canakar1657
      @canakar1657 a month ago

      @@tongtongrizhi Thanks for the answer. So, if I want to use SDXL models, will a 6700 XT with 12GB VRAM be sufficient? I can produce 512x512 images in seconds, but it takes a couple of minutes to produce a 1024x1024 image. Is this normal? The XL models give better results; do you have any advice for producing them faster with 12GB VRAM?

    • @tongtongrizhi
      @tongtongrizhi  a month ago

      @@canakar1657 Yes, SDXL models take longer to generate, and that is totally normal. I couldn't see any prominent difference between SD 1.5 and SDXL models, so I only use SD 1.5 models. Unfortunately, I don't think there is any solution to generate SDXL images faster; maybe using ROCm on Linux could solve the issue, but not everyone is willing to take that route.

  • @rekii848
    @rekii848 4 months ago

    I want to ask if you have ever used DirectML + Olive, and if yes, which one is faster?

    • @tongtongrizhi
      @tongtongrizhi  4 months ago

      Olive/ONNX is faster, but the sampling methods and inpainting suck really big time. Fastest to slowest is ONNX > ZLUDA > DirectML.

    • @rekii848
      @rekii848 4 months ago

      I see, thanks for the info 🥰 @@tongtongrizhi

    • @tongtongrizhi
      @tongtongrizhi  4 months ago +1

      @@rekii848 you're welcome 😊

  • @mezackx9292
    @mezackx9292 4 months ago

    I'm having trouble converting the models, most of the ones I use give me an error. Could you help me solve it?

    • @tongtongrizhi
      @tongtongrizhi  4 months ago

      It takes a long time to complete. May I know what GPU you use? Or what error do you see when you click the host?

    • @mezackx9292
      @mezackx9292 4 months ago

      @@tongtongrizhi RX 580 8GB

    • @tongtongrizhi
      @tongtongrizhi  4 months ago

      @@mezackx9292 Your GPU may take longer than mine. During conversion, try not to do anything else with the PC and let it run for an hour.

  • @tvanime6747
    @tvanime6747 4 months ago

    Traceback (most recent call last):
    File "${Workspace}\repo\diffusion_scripts\sd_onnx_safe.py", line 1, in <module>
    import os, sys, time, argparse, json, torch, onnx
    ModuleNotFoundError: No module named 'torch'

    • @tongtongrizhi
      @tongtongrizhi  4 months ago

      Could you let me know how you got this error?

    • @tvanime6747
      @tvanime6747 4 months ago

      That error appears when generating the image @@tongtongrizhi

    • @tongtongrizhi
      @tongtongrizhi  4 months ago

      @@tvanime6747 If the same problem persists, try reinstalling the program. If it still doesn't work, your GPU may not be compatible with it.

  • @tvanime6747
    @tvanime6747 4 months ago

    Would an AMD RX 580 with 8 GB VRAM work? Or not, bro?

    • @tongtongrizhi
      @tongtongrizhi  4 months ago

      I don't have the exact hardware to test it. If you could test it for me, let me know if it works. That way it can help the AMD user community

    • @tvanime6747
      @tvanime6747 4 months ago

      No, bro, I'm telling you that I managed to get ZLUDA working on my RX 580 with 8 GB VRAM in Stable Diffusion @@tongtongrizhi

    • @tongtongrizhi
      @tongtongrizhi  4 months ago

      @@tvanime6747 That's cool. How is the performance when generating an image?

    • @franknunez6546
      @franknunez6546 3 months ago

      Bro, how did you get ZLUDA working on your graphics card? (Without errors?)

    • @tongtongrizhi
      @tongtongrizhi  3 months ago

      @@franknunez6546 Watch this video if you need to use SD with ZLUDA:
      th-cam.com/video/YazUwPNsdzE/w-d-xo.html

  • @IshanJaiswal26
    @IshanJaiswal26 2 months ago

    Host started...
    RTL be like: 22631.10.0
    Name - AMD Radeon RX 6600
    DeviceID - VideoController1
    AdapterRAM - 4293918720
    AdapterDACType - Internal DAC(400MHz)
    Monochrome - False
    DriverVersion - 31.0.24027.1012
    VideoProcessor - AMD Radeon Graphics Processor (0x73FF)
    VideoArchitecture - 5
    VideoMemoryType - 2
    Current device: onnx
    txt2img
    Traceback (most recent call last):
    File "${Workspace}\repo\diffusion_scripts\sd_onnx_safe.py", line 16, in <module>
    pipe = PipeDevice.GetPipe(opt.mdlpath, opt.mode, opt.nsfw)
    File "${Workspace}\repo\diffusion_scripts\sd_xbackend.py", line 90, in GetPipe
    pipe = OnnxStableDiffusionPipeline.from_pretrained(Model, custom_pipeline=self.LPW_Path(), provider=self.prov, safety_checker=nsfw_pipe)
    File "${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 884, in from_pretrained
    cached_folder = cls.download(
    File "${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1208, in download
    Current device: onnx
    txt2img
    Traceback (most recent call last):
    File "${Workspace}\repo\diffusion_scripts\sd_onnx_safe.py", line 16, in <module>
    pipe = PipeDevice.GetPipe(opt.mdlpath, opt.mode, opt.nsfw)
    File "${Workspace}\repo\diffusion_scripts\sd_xbackend.py", line 90, in GetPipe
    pipe = OnnxStableDiffusionPipeline.from_pretrained(Model, custom_pipeline=self.LPW_Path(), provider=self.prov, safety_checker=nsfw_pipe)
    File "${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 884, in from_pretrained
    cached_folder = cls.download(
    File "${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1208, in download
    config_file = hf_hub_download(
    File "${Workspace}\repo\onnx.venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 111, in _inner_fn
    validate_repo_id(arg_value)
    File "${Workspace}\repo\onnx.venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 159, in validate_repo_id
    raise HFValidationError(
    huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'C:/Users/ishan/Downloads/XUI3/models/onnx/multiplemixyy_highmixed10'. Use `repo_type` argument if needed.
    This is the error that came up.

    • @tongtongrizhi
      @tongtongrizhi  2 months ago

      Have you converted the model before generating an image?