Keyboard Alchemist
How to Install and Use Stable Diffusion in 2024 FREE & LOCAL. Ultimate Quick Start Guide.
#aiart, #stablediffusiontutorial, #generativeart
In this tutorial, I will show you how to install and run the latest Stable Diffusion models quickly with a FREE application called Stability Matrix, available on Windows, macOS, and Linux. Stability Matrix is an open-source app that puts all the best and most popular webUI packages under one roof: you can run A1111, Forge, Fooocus, and ComfyUI, just to name a few. All packages are available for 1-click installation; no more manually installing Git and Python. Stability Matrix is free to install, local and private, loaded with quality-of-life features, and simple and intuitive to use. Whether you are brand new to Stable Diffusion or a seasoned pro, there is something for you in Stability Matrix.
For me, some of the really nice quality-of-life features of Stability Matrix are:
(1) it automatically checks for UI updates and lets you 1-click install any UI package update;
(2) it automatically checks the Python dependencies for each UI package and updates them as needed;
(3) it embeds Git and Python so they do not need to be globally installed for your UI packages to work, eliminating potential conflicts with other apps on your computer that might need a different version of Git or Python;
(4) the Model / Checkpoint manager lets you use your checkpoints, LoRAs, Textual Inversions, etc. across all UI packages, saving disk space and the effort of managing these files; and
(5) the Model Browser lets you import files directly from CivitAI and HuggingFace and files them into the corresponding model folder depending on the model type.
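Feature (4) is worth a quick illustration: model sharing works by symlinking each package's model folder to one shared location, so a checkpoint downloaded once is visible to every UI. The sketch below uses made-up folder names, not Stability Matrix's actual layout (and note that creating symlinks on Windows may require Developer Mode or admin rights):

```python
import os
import tempfile

root = tempfile.mkdtemp()

# One shared model folder (hypothetical layout, for illustration only).
shared = os.path.join(root, "Models", "StableDiffusion")
os.makedirs(shared)

# A checkpoint is placed ONCE in the shared folder...
open(os.path.join(shared, "example-checkpoint.safetensors"), "wb").close()

# ...and each UI package gets a symlink to it instead of its own copy.
for pkg in ("a1111", "forge"):
    pkg_models = os.path.join(root, "Packages", pkg, "models")
    os.makedirs(pkg_models)
    os.symlink(shared, os.path.join(pkg_models, "Stable-diffusion"))

# Every package now sees the same single file.
for pkg in ("a1111", "forge"):
    link = os.path.join(root, "Packages", pkg, "models", "Stable-diffusion")
    print(pkg, os.listdir(link))
```

Because every package resolves to the same directory, a 6 GB checkpoint costs 6 GB once, not once per webUI.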
Chapters:
00:00 - Intro
00:56 - Stability Matrix Overview
01:27 - Installing Stability Matrix
03:35 - Choose an Initial WebUI Package
05:56 - Choose an Initial Model or Checkpoint
08:51 - Launch Forge WebUI
10:52 - Menu option: Checkpoints
11:18 - How to import your existing Checkpoints and LoRAs
12:10 - How to import your existing Textual Inversions
12:50 - How to import your existing VAEs
14:00 - Installing A1111 and Enable Model Sharing (‘Symlink’)
15:25 - Comparing A1111 and Forge image generation speed
16:55 - Installing Fooocus UI
18:37 - Menu option: CivitAI Model Browser
19:49 - Importing a new Checkpoint from CivitAI Model Browser
21:56 - Menu option: Output Browser
23:14 - Launch Options
25:35 - Menu option: Inference
26:45 - Summary
Useful links:
Stability Matrix github:
github.com/LykosAI/StabilityMatrix
A1111 command line arguments:
github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings
***If you enjoy my videos, consider supporting me on Ko-fi***
ko-fi.com/keyboardalchemist
Views: 4,129

Videos

Regional Prompt Control with Tiled Diffusion/MultiDiffusion, Compose your Image Like A Boss! [A1111]
Views: 4.4K · 6 months ago
#aiart, #stablediffusiontutorial, #generativeart This tutorial will show you how to use Regional Prompt Control within the Tiled Diffusion / MultiDiffusion extension to compose your images the way you want! No, this isn't quite the same as Regional Prompter; I think it's actually easier to use. Plus, you can use LoRAs, ADetailer, and ControlNet with this extension to give you even more con...
Tiled Diffusion with Tiled VAE / Multidiffusion Upscaler, the Ultimate Image Upscaling Guide [A1111]
Views: 14K · 7 months ago
#aiart, #stablediffusiontutorial, #generativeart This tutorial will cover how to upscale your low-resolution images to 4K resolution and above with the Tiled Diffusion with Tiled VAE (MultiDiffusion) extension in A1111. We will walk through the workflow in the first part of the video, and then go into detail on how each setting affects your resulting image. As always, feel free to ...
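For intuition on what the extension does under the hood: the image (or its latent) is split into overlapping tiles, each tile is diffused separately, and the overlaps are blended to hide seams. Here is a toy sketch of how overlapping tile offsets might be computed; this is illustrative only, not the extension's actual code:

```python
def tile_starts(length, tile, overlap):
    """Offsets of tiles of size `tile` covering `length` pixels,
    with at least `overlap` pixels shared between neighbors."""
    if tile >= length:
        return [0]  # a single tile already covers everything
    stride = tile - overlap
    starts = list(range(0, length - tile, stride))
    starts.append(length - tile)  # last tile sits flush with the edge
    return starts

# e.g. one 1280-px side, 512-px tiles, 64-px overlap:
print(tile_starts(1280, 512, 64))  # [0, 448, 768]
```

Smaller tiles and larger overlaps mean more diffusion passes per image, which is why the overlap setting trades speed against seam quality.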
How to Inpaint in Stable Diffusion A1111, A Detailed Guide with Inpainting Techniques to level up!
Views: 19K · 10 months ago
#aiart, #stablediffusiontutorial, #generativeart This tutorial will cover how to inpaint in Stable Diffusion A1111 and some inpainting techniques, using tools like Photopea, img2img inpaint, inpaint sketch, and LoRAs. Along the way, I will give you some tips and tricks for quickly and consistently obtaining great inpainting results. As always, feel free to leave a comment down below and who...
STOP wasting time with Style LORAs! Use THIS instead! How to copy ANY style with IP Adapter [A1111]
Views: 40K · 11 months ago
#aiart, #stablediffusiontutorial, #generativeart This tutorial will show you how to use IP Adapter to copy the Style of ANY image you want and how to apply that style to your own creation. It does the job of a LORA with just one image. We will also compare Control Net Reference Only versus IP Adapter and see how these two are different from each other. Chapters: 00:00 - Intro 00:26 - Topics ove...
ADetailer in A1111: How to auto inpaint and fix multiple faces, hands, and eyes with After Detailer.
Views: 18K · 11 months ago
#aiart, #stablediffusiontutorial, #automatic1111 This tutorial walks through how to install and use the powerful After Detailer (ADetailer) extension in A1111 to automatically inpaint and fix faces, hands, eyes, and entire bodies without the manual work of drawing a mask and inpainting, so your images will always come out looking great! We will also go through an advanced use case where you ca...
How to do Outpainting without size limits in A1111 Img2Img with ControlNet [Generative Fill w SD]!
Views: 20K · 1 year ago
#aiart, #stablediffusiontutorial, #automatic1111 This tutorial walks you through how to outpaint any image by expanding its borders and filling in details in the extra space outside of your original image, similar to the generative fill functionality of Photoshop. We will also walk through how to unlock the 2048 x 2048 image size limit of Automatic 1111 by using the super-secret-ultimate "Limi...
How to change ANYTHING you want in an image with INPAINT ANYTHING+ControlNet A1111 [Tutorial Part2]
Views: 38K · 1 year ago
#aiart, #stablediffusiontutorial, #automatic1111 This is Part 2 of the Inpaint Anything tutorial. Previously, we went through how to change anything you want in an image with the powerful Inpaint Anything extension. In this tutorial, we will take a look at how to use the ControlNet inpaint model, and the ControlNet and Cleaner features within the Inpaint Anything extension! Installation instr...
How to change ANYTHING you want in an image with INPAINT ANYTHING A1111 Extension [Tutorial Part1]
Views: 111K · 1 year ago
#aiart, #stablediffusiontutorial, #automatic1111 This tutorial walks you through how to change anything you want in an image with the powerful Inpaint Anything extension. We will install the extension, then show you a few methods to inpaint and change anything in your image. The results are AMAZING! Chapters: 00:00 Intro 01:12 Overview of Inpaint Anything Extension 01:43 Install Inpaint Anythin...
Do THIS to speed up SDXL image generation by 10x+ in A1111! Must see trick for smaller VRAM GPUs!
Views: 19K · 1 year ago
#SDXL, #automatic1111, #stablediffusiontutorial Is your SDXL 1.0 crawling at a snail's pace? Make this one change to speed up your SDXL image generation by 10x or more in Automatic 1111! I have an 8GB 3060 Ti GPU and it was unbearably slow, until I put in this command line argument, and it sped up my image generation with the base model plus refiner by 10x to 14x for a single image generation. It...
How to Install and Use SDXL 1.0 Base and Refiner Models with Troubleshoot Tips!
Views: 12K · 1 year ago
SDXL 1.0 is finally released! This video will show you how to download, install, and use the SDXL 1.0 Base and Refiner models in Automatic 1111 Web UI. We will also compare images generated with SDXL 1.0 to images generated with fine-tuned version 1.5 models. Is this the beginning of the end for version 1.5 models? Let's find out! NOTE: check out my latest video on how to install Stable Diffusi...
How to use XYZ plots Script to Optimize Parameters and Get the Most Out of your Model!
Views: 14K · 1 year ago
This video tutorial walks you through how to use the XYZ plot script in Automatic 1111 and provides a simple workflow that can help you find the optimal values of Sampling Steps, CFG Scale, and Sampling Method for the model you are using. I recommend doing this when you first start working with a new model or checkpoint. Chapters: 00:00 Intro 00:10 Ep2 - How to get the most out of your m...
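Conceptually, the XYZ plot script renders one image for every combination in the Cartesian product of the values you enter on each axis, then arranges the results in a grid. A quick sketch of the combinatorics, with hypothetical sweep values:

```python
from itertools import product

# Hypothetical values for a first sweep on a new checkpoint.
steps = [20, 30, 40]                       # X axis
cfg_scales = [5.0, 7.0, 9.0]               # Y axis
samplers = ["Euler a", "DPM++ 2M Karras"]  # Z axis

# One rendered image per (steps, cfg, sampler) combination.
grid = list(product(steps, cfg_scales, samplers))
print(len(grid))  # 18 -> the script would render 18 images
for s, cfg, sampler in grid[:2]:
    print(f"steps={s}, cfg={cfg}, sampler={sampler}")
```

Axis sizes multiply, so keep each value list short or the total render time explodes.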
4 Easy Steps to Install Automatic1111 on your PC for FREE! Start creating fast!
Views: 4.9K · 1 year ago
This video tutorial walks you through, in detail, how to install and run Stable Diffusion Automatic1111 on your Windows PC with an Nvidia graphics card, and how to quickly start generating AI images. NOTE: This method is a bit outdated. Take a look at my other video for a better way to install Stable Diffusion: th-cam.com/video/85KR3GdS4wE/w-d-xo.html Chapters: 00:00 Intro 01:14 Ep1 - Install Stab...

Comments

  • @avalerionbass
    @avalerionbass 3 days ago

    I use this utility ALL the time. Here are a few tips for the highest quality you can achieve in the end results, if you want to be a try-hard like me (you'll need to adjust depending on your GPU strength; I have a 4090). TL;DR: I know this isn't practical for most people, but if you're reading this and, like me, you want the BEST possible quality in the end result and you have the gear, this is how:
    1. Use 128x128 latent tile width/height settings with a 64 latent tile overlap (the suggestion of 8 in the video is probably more performant), and use the other recommended settings.
    2. Set up an XYZ script with denoising, and in the parameters put 0.1-0.5 (+0.05). This will generate 9 different images and takes a while; even on my 4090 it's around 8-10 minutes. You can probably just do 0.1-0.5 (+0.1) and be fine with 5 images.
    3. The reason you do the previous step is that each level of denoise produces a slightly different picture: some with better eyes, some with better clothes, etc.
    4. Take the lowest-denoise photo and put it into a photo editor of your choice as a base (I use Photopea).
    5. Import the other photos, set each as a raster mask layer, and hide them all.
    6. Use a brush with black and white colors and the lowest hardness setting to start blending in the different layers. This allows you to take the best from every single gen and combine them into a completely flawless end product.
    7. Afterwards, you can adjust color balance, hue, contrast, and brightness to clean up the picture. Many pieces tend to gen with a slight red or green hue that needs to be fixed.
    8. If you do this and it needs a lot of editing, it can introduce what's called color banding. To fix it, export your final result and import it into img2img. Turn off Tiled Diffusion/VAE, set denoise to 0, turn off the XYZ script, set scale to 1, and re-gen. This will smooth out any artifacts introduced by editing while maintaining the original picture at its original size.
    Also, if you DO have high-end gear, turn off the Fast Encoder/Decoder. If you zoom in on a final product, it can look as though the entire gen was made of tiny little dots. If you need the performance advantage of the fast coders, you can fix this issue afterwards with step 8, which will remove them.
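For anyone curious, the soft-brush blending described in steps 5-6 above is numerically just a per-pixel linear interpolation between two renders, weighted by the mask. A minimal sketch with made-up grayscale values:

```python
def blend(base, layer, mask):
    """Per-pixel mix: mask 1.0 (white) shows `layer`,
    0.0 (black) keeps `base`, and grays blend proportionally."""
    return [b * (1 - m) + l * m for b, l, m in zip(base, layer, mask)]

# Toy 4-pixel example (hypothetical pixel values):
base  = [10, 10, 10, 10]      # low-denoise render
layer = [90, 90, 90, 90]      # higher-denoise render
mask  = [0.0, 0.25, 0.5, 1.0]  # soft-brush strokes
print(blend(base, layer, mask))  # [10.0, 30.0, 50.0, 90.0]
```

The low brush hardness simply produces intermediate mask values, which is what makes the transitions between layers invisible.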

  • @skistenl6566
    @skistenl6566 12 days ago

    When I enable Region Prompt Control, I always get this error: "TypeError: expected Tensor as element 0 in argument 0, but got DictWithShape". It doesn't generate an image, but when I uncheck it, it works (of course without the region control). Any idea how to fix that? Thank you

  • @skistenl6566
    @skistenl6566 12 days ago

    I wish I had watched your tutorial before going down the rabbit hole 😅 Thank you so much for the great tutorial. Even if it would make your videos a bit longer, quick explanations of terms like VAE, textual inversions, etc. could make them seem more approachable, IMHO.

  • @FullStackFalcon
    @FullStackFalcon 13 days ago

    Great video. What's your opinion of ComfyUI? I am curious why you never make any videos on it.

  • @user-Dilikelei
    @user-Dilikelei 20 days ago

    Oh no, I don't know why my computer can't download the inpaint model ID. I followed your video properly 😢 I need help 😢

    • @user-Dilikelei
      @user-Dilikelei 20 days ago

      My internet is OK, but the program doesn't begin to download the inpaint model 😢

  • @michaelredman8179
    @michaelredman8179 21 days ago

    What if I’m getting a “package modification failed”? Please help.

  • @littledovecitydust
    @littledovecitydust 22 days ago

    if you have multiple areas that need to be fixed, can you use latent upscaling and then downscale the image so it's more workable, and then use latent upscaling again?

  • @littledovecitydust
    @littledovecitydust 22 days ago

    Is this channel abandoned?

  • @Final-Ts
    @Final-Ts 26 days ago

    Thanks for the video! Do you know if there is a way to train your own models and use them with this program? I mean like uploading 100 pics of a particular person's face to get a model. Can you do that with this?

  • @ranga5823
    @ranga5823 27 days ago

    Thank you very much sir

  • @conte.vinicius
    @conte.vinicius 1 month ago

    Such an amazing video! I just started with SD 2 weeks ago, and this video is what made my images reach a much higher quality level. Thanks!

    • @KeyboardAlchemist
      @KeyboardAlchemist 1 month ago

      I'm glad it was helpful! Thank you for your kind words!

  • @nataliamacias73
    @nataliamacias73 1 month ago

    Excellent guide, you made it so easy to understand! Thanks!

    • @KeyboardAlchemist
      @KeyboardAlchemist 1 month ago

      You're welcome! I'm glad you liked the video!

  • @PejmanEbrahimikingofseps
    @PejmanEbrahimikingofseps 1 month ago

    Sir, I have an important request. How can we re-design an image from low quality to high quality, without upscaling? I want the AI to recognize the objects and re-design them in high quality. Big thanks!

  • @venomvenom6971
    @venomvenom6971 1 month ago

    Man, I searched everywhere to find out how to get the brush color toggle for inpainting, and only this video explained how to manually install the extension by going into the Available tab. It says that Canvas Zoom is "built-in" in my A1111, but it doesn't update and it didn't come with the brush color tool. This video really helped, thanks so much bro

  • @PejmanEbrahimikingofseps
    @PejmanEbrahimikingofseps 1 month ago

    Thank you ❤, it works perfectly

  • @DarienLingstuyl
    @DarienLingstuyl 1 month ago

    If I want to add a LoRA like "Add More Details" or "Sharpness Tweaker", will it work with this tiled diffusion method?

  • @AlexG.O.A.T.
    @AlexG.O.A.T. 1 month ago

    Thank you for this tutorial, it was very useful because you showed things step by step and didn't skip anything. The only problem I have is that I can't find anything other than the inpainting model I put in ControlNet in the models folder; is that normal?

  • @kicapanmanis1060
    @kicapanmanis1060 1 month ago

    Tried going to the Hugging Face page, but the file is gone. Or at least, if it's there, it's very different from the one you showed.

  • @SupremacyGamesYT
    @SupremacyGamesYT 1 month ago

    Trying to do method 2, but I get botched results. I think this needs an updated guide, or else my setup requires different parameters.

  • @AyuK-jm1qo
    @AyuK-jm1qo 1 month ago

    Did things change, or do I need to do something else? I have downloaded everything and put it into the folder, but the preprocessor doesn't have the one I downloaded. It only has auto, clip_h, pulid, face_id_plus, face_id, clip_sdxl_plus_vith, and clip_g

  • @TheSchwarzKater
    @TheSchwarzKater 2 months ago

    13:20 this advanced part was very helpful. I always struggled with preventing artifacts.

  • @rexidexi7681
    @rexidexi7681 2 months ago

    When doing this, Inpaint just stretches the image. Any idea what I'm doing wrong?

  • @TheAncientDemon
    @TheAncientDemon 2 months ago

    This is great. I tried it myself and, of course, screwed something up. When I find a background generation I like, I then try to add characters to it while keeping the same seed number, but the background changes even when I keep the same seed number. Not sure how to proceed without relying on RNG. Any thoughts?

  • @theaussiegeek
    @theaussiegeek 2 months ago

    is there a video on installing the segment-anything model ?

  • @theaussiegeek
    @theaussiegeek 2 months ago

    There is no "SEARCH" option for Extensions.

  • @CrankAlexx
    @CrankAlexx 2 months ago

    It doesn't work with Pony or XL models, can you use your alchemy to make it work?

  • @望清苑月明
    @望清苑月明 2 months ago

    Awesome! I ran into this problem yesterday, the infinitely extending arm. Thank you for your video!

    • @KeyboardAlchemist
      @KeyboardAlchemist 2 months ago

      Glad I can be of help! Thanks for watching.

  • @АлександрСергеевич-с4х
    @АлександрСергеевич-с4х 2 months ago

    Really, thank you again!

  • @SupremacyGamesYT
    @SupremacyGamesYT 2 months ago

    Does anyone know why I'm getting little bumpy circle artifacts on upscaled images when using method 1? EDIT: It's something with FAST DECODER; with it off I get slower gens, but I must.

  • @windowsxptutos6391
    @windowsxptutos6391 2 months ago

    You say "I like to keep my extensions up to date all the time", so just add: for /D %%I in (extensions\*) do git -C "%%I" pull at the start of webui-user.bat. That will update them all at startup; no need to restart :)

  • @fmuldfhalo8673
    @fmuldfhalo8673 2 months ago

    Hi, how do you generate the heatmap?

  • @renwar_G
    @renwar_G 2 months ago

    This video is a Gem, well done Sir 🔥

    • @KeyboardAlchemist
      @KeyboardAlchemist 2 months ago

      Thank you, I'm glad you liked the video!

  • @hanxia2986
    @hanxia2986 3 months ago

    FYI, After Detailer is now called ADetailer, in case you cannot find it in the extension list.

  • @SupremacyGamesYT
    @SupremacyGamesYT 3 months ago

    So nowadays, is it normal for an SDXL image to take 25-30 seconds to generate on a 3080 10GB? I was generating at 720x1280. Cheers.

  • @zwjkeen
    @zwjkeen 3 months ago

    thanks teach

  • @cyberspider78910
    @cyberspider78910 3 months ago

    This video is the gold standard for anyone starting with Automatic1111.

  • @How-xw1ws
    @How-xw1ws 3 months ago

    It worked at first then all of a sudden it just stopped doing anything for the eyes. I can detect the eyes but it's like it doesn't even change them after running the adetailer. The eyes are the same afterwards, anyone run into this issue?

  • @fishpickles1377
    @fishpickles1377 3 months ago

    This is insanely helpful

    • @KeyboardAlchemist
      @KeyboardAlchemist 3 months ago

      I'm glad it was helpful. Thank you for watching!

  • @fishpickles1377
    @fishpickles1377 3 months ago

    These videos are super helpful, thanks!

  • @fishpickles1377
    @fishpickles1377 3 months ago

    Super great tips in this video, thanks!

  • @dulay28
    @dulay28 3 months ago

    Can it use a reference image of the clothes that I want?

  • @Vanced2Dua
    @Vanced2Dua 3 months ago

    My question is: how do I install extensions in Stability Matrix? Thank you, I have subscribed to your channel.

  • @eugenekhristo7252
    @eugenekhristo7252 3 months ago

    Thanks a lot! The generation speed on Forge is just mind-blowing compared to A1111: at least 4 times faster on SDXL on my 3070 8GB. I just have one issue. When I put upscalers in the ESRGAN folder, they show up doubled in my UI's hires. fix tab. Does anybody have the same?

  • @eugenekhristo7252
    @eugenekhristo7252 3 months ago

    Why is it taking hours to process one generation? In my console it writes "Enable model cpu offload"; I guess because of that, one image takes almost 3 min...

    • @KeyboardAlchemist
      @KeyboardAlchemist 3 months ago

      You don't want your CPU to do the work. You have to check to make sure that your GPU is doing the image generation. If it's not the GPU, then it will be very slow.

  • @CEAG23
    @CEAG23 3 months ago

    Thank you!!!!!

  • @farcfoxguerrilha
    @farcfoxguerrilha 4 months ago

    Very good! Thanks!

    • @KeyboardAlchemist
      @KeyboardAlchemist 4 months ago

      Glad you liked it! Thanks for watching!

  • @GES1985
    @GES1985 4 months ago

    Is there a way to take an item or jewelry from one picture and put it into another? Or is that just something to do in Photoshop?

    • @KeyboardAlchemist
      @KeyboardAlchemist 4 months ago

      I made a video previously about this, check it out here: th-cam.com/video/akzu3R7lDZ4/w-d-xo.html. I hope this helps.

  • @ISIMUSI
    @ISIMUSI 4 months ago

    Very interesting video; I didn't know this tool. Since you are open to new requests, I would like to propose one that interests me. I am getting used to using Automatic1111 on the renderings I produce to add some details here and there. Since they are high-res images, I use the Tile/Blur (ControlNet) + Tiled Diffusion & Tiled VAE combo to work directly on them and do further upscaling. When there are people in the images, they very often come out wrong. Is there any way to use the Inpaint module directly at high resolutions without making crops?

    • @KeyboardAlchemist
      @KeyboardAlchemist 4 months ago

      I have a video on Tiled Diffusion / Tiled VAE, check it out here: th-cam.com/video/44waH3sDYOM/w-d-xo.html. Not sure if this video answers your question or not.

  • @felixmontanez4090
    @felixmontanez4090 4 months ago

    what model did u use to make the base image?

    • @KeyboardAlchemist
      @KeyboardAlchemist 4 months ago

      The model is called 'majicMIX realistic', you can find it on CivitAI.

    • @felixmontanez4090
      @felixmontanez4090 4 months ago

      @@KeyboardAlchemist what prompt did u use

    • @KeyboardAlchemist
      @KeyboardAlchemist 3 months ago

      @@felixmontanez4090 17:10 of the video has all the prompt info that you will need. Cheers!