This is a great tutorial, but using it on Ubuntu makes me a bit sad, because most extensions simply don't work or won't install. Maybe it's because of my GPU (RTX 2060, 6 GB), but when I had Windows on the same machine it had more extensions preinstalled and working. For example, I don't have FreeU or ControlNet SDXL. As far as I remember, it worked better than it does now. Did they make some updates that made it work worse? (Last used 3 months ago)
They stopped updating the official version; there are some forks still around, but I'm not sure how many updates they get. You can try the last stable version or switch to dev2: th-cam.com/video/RZJJ_ZrHOc0/w-d-xo.html. I have the same GPU on an older PC, but I wasn't able to run it with ControlNet, it crashed. ComfyUI works OK, but I haven't tried complex stuff yet.
6:21 Wow, didn't know about that, I thought the only way to change it was to edit it manually in some file I don't remember now. Still, I would like it to have different defaults for each checkpoint, is that possible?
So, in the arguments section where you put the dark theme, I can add --pin-shared-memory --cuda-malloc --cuda-stream for optimization, right? Thanks for the video!
Yes, I tried all those as Forge suggested, but they didn't make it faster on my RTX 4090, only slower. Maybe it does better for you, but for me it was faster with no arguments.
It's in webui-user.bat: look for set COMMANDLINE_ARGS= and add it after the equals sign. For example, I added the dark theme; you can add more: set COMMANDLINE_ARGS=--theme dark --cuda-stream
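For context, the stock webui-user.bat looks roughly like this (a sketch of the typical file layout; only the COMMANDLINE_ARGS line needs editing, the other lines can stay empty):

```
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--theme dark --cuda-stream

call webui.bat
```

Each argument is separated by a space; remove --cuda-stream if it slows things down on your card.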
Flux models can do that most of the time without fixing, so if your video card can handle Flux you should try it. I have videos on Flux for both Forge and ComfyUI; Forge is still a work in progress, it works with Flux but other things don't work yet, they are changing the interface and need time to fix everything.
@@pixaroma Yes, you are right, but in this case I was talking only about SD. Flux handles hands and eyes pretty well, but when two hands are close or touching each other, it makes mistakes. Anyway, I suppose it will be fixed very soon, like everything else in the AI realm :)
@@CsokaErno With SDXL I got OK results if I used ControlNet; there are some more nodes that can be used. I will see if I can gather enough info for an episode about that in the future.
@@pixaroma Thank you, I hope someone can answer. I don't have a machine with the required performance to do a local installation, so that would be a great help. Why am I asking? It's just for the seamless pattern setting that exists in the models presented; this capability isn't offered right now in Fooocus, which is easily accessible with Colab.
Either your video card is not good enough or Forge doesn't recognize it. I am a designer, not a coder, but you can try adding the following arguments in webui-user.bat to see if it works. It needs at least 6 GB of VRAM and prefers Nvidia cards, but try it anyway: set COMMANDLINE_ARGS=--skip-torch-cuda-test --precision full --no-half
I see someone already posted that in the bugs area; you can watch it to see if it gets any response if nothing else works: github.com/lllyasviel/stable-diffusion-webui-forge/issues
I tested on 6 GB and it worked; only ControlNet gave me some crashes, but everything else worked for me, faster than Automatic, which took ages. It's worth a try, and if you don't like how it works you can just delete the folder with all the files. But I believe you can use most of the functions; if Automatic1111 works, this should work even faster.
I ran Forge on a 6 GB RTX 2060, so it should work. Another solution would be ComfyUI, since Forge will not be updated anymore, and once they update Automatic1111 that will also be a good option.
Not sure if all of those work, but did you install them from the Extensions tab? Go to the Extensions tab, click Available, then click the Load From button, which loads them all, and search for an extension. For example, I tried "ratio helper" in the search and it installed just fine when I clicked Install and restarted Forge.
I am not sure; can you join my Facebook group and show me some screenshots, or post there so I can take a look? Do you get any errors, and what does it look like?
You can read more about it here; I didn't play with them in Forge, only with the Canny ControlNet mostly. Also keep in mind which version you are using, there are different forks of Forge now, and the main one is used for beta testing, so many things might not work! github.com/lllyasviel/stable-diffusion-webui-forge/discussions/178
I have an NVIDIA graphics card, but I get this error when running Forge: "Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check." How can I solve this?
In the last month there have been a lot of updates to Forge UI; today I am working on an update video covering what is new. Go to this discussion page to see what they changed; in the comments some people had the same problem as yours, and it seems to have something to do with Forge. At the bottom of the page you can see the comments, click to load more, and with Ctrl+F you can search for "torch is" to find the comments containing those words: github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981 Also check the comments on this page for how some people used different argument settings: github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1742
Thanks for the ControlNet section, I was stumped on where to put the models. Edit: can anybody help me with LoRA models? I paste them into the LoRA folder but Forge doesn't seem to detect them.
I just tested now with a file and it seems to be recognized: webui\models\Lora, so the folder is Lora. After you paste it there, go to the Lora tab in the interface and refresh the page, or just restart Stable Diffusion so it can see it.
Little question: has anyone had a problem with LyCORIS models on Forge? I'm using Forge through Stability Matrix, and no matter how I load them, from my computer or Matrix's model loader, they just don't show up in the Lora tab. And when I load one into the Lora folder, it doesn't work correctly.
There should be a file called update.bat next to run.bat and environment.bat; I've had it there since installation, so you should have it too. Just be careful with updates, so you keep a good stable version; check this video: th-cam.com/video/RZJJ_ZrHOc0/w-d-xo.htmlsi=rF-9wCmzResJiW3L
My built-in controlnet's IP-Adapter is missing its models, and thus, doesn't work. Any ideas? I wanted to install them manually, but the library is different, and so are the files.
Forge still has some problems with control net, check this discussion maybe it helps github.com/lllyasviel/stable-diffusion-webui-forge/discussions/178#discussioncomment-8572388
@@pixaroma It has it now, but there are bugs that cause LoRAs not to generate, and it just displays "Error". It's amazing when it works, but annoying how often Forge UI just has problems. Eventually I just learned how to use ComfyUI and enjoyed having something that actually just works.
@@sociallyresponsiblexenomor7608 Yeah, that's why I switched to Comfy too and created that ComfyUI series; I learn new things each day and got used to the nodes.
What if you use both 1.5 and XL checkpoints? Do you have to keep manually changing the FreeU settings every time? Also, with HyperTile, should the tile size be half of the longest side of the generated dimensions? What if I am using 2x hires fix, should it be half the size of the hires fix? What do the HyperTile swap size and depth settings do? And is there a way, like in SD.Next, to have HyperTile set to automatic mode?
I don't use those settings much, so I can't give you more details. I know that for ControlNet you have to keep changing models manually; 1.5 is different from SDXL, so it needs different models and settings.
@@pixaroma Yes, that's to be expected for ControlNet, as 1.5 and XL use different models. That's easily done. What's not easily done is having to remember the precise FreeU settings for 1.5 and XL. Can't believe there is no preset option to load these on the fly. I am also disappointed that there is no auto mode for the HyperTile tile size. Damn, all these different UIs should come together and make one ultimate UI.
How do I add the "ip-adapter_face_id_plus" preprocessor for IP-Adapter? It's not in Forge. "ip-adapter_face_id_plus" works better than "InsightFace+CLIP-H (IPAdapter)".
Maybe look here: gist.github.com/ShMcK/d14d90abea1437fdc9cfe8ecda864b06 aws.amazon.com/blogs/machine-learning/use-stable-diffusion-xl-with-amazon-sagemaker-jumpstart-in-amazon-sagemaker-studio/ As I don't use AWS, I can't help.
Please, I got this error when I started run.bat:
C:\Users\moor\stable-diffusion-webui-forge>python launch.py
Traceback (most recent call last):
  File "C:\Users\moor\stable-diffusion-webui-forge\launch.py", line 1, in <module>
    from modules import launch_utils
ImportError: cannot import name 'launch_utils' from 'modules' (C:\Users\moor\AppData\Roaming\Python\Python310\site-packages\modules.py)
I'm not sure what's causing the error, but what you can try is to create a folder on another drive and try again fresh; maybe that makes it run. I don't know coding, but it looks like it cannot import a file, maybe something didn't download, or it's a bug; that's why I say to try a fresh install in another folder.
Just updated to the latest Forge version, the one that can work with Flux, but using only SDXL on my 8 GB card: every time I do inpainting or img2img the result has lower saturation than the original. Is it me, or what? Assigning a VAE does not solve it 😢
There are a lot of bugs in the new version, so it will take a while for everything to get fixed. This one has a similar problem: github.com/lllyasviel/stable-diffusion-webui-forge/issues/1189 and if you look at the list of open issues there are around 600: github.com/lllyasviel/stable-diffusion-webui-forge/issues
I use Forge's Deforum tab to create animations. I would like to know how to create animations within a boundary. I do projection mapping, so I would like to keep the animations within the map of my house. Would you know how to accomplish this? I have a PNG map file that I created but I'm unsure what to use it with. TIA
Sorry, I haven't played with Deforum yet, so I can't help there. I like to create HQ images, and video and animation aren't quite there yet; I am waiting for an improvement before I jump into it.
@@ZeroCool22 I think there were some problems with ADetailer and some extensions. ControlNet, for example, only works for me if the image width and height are divisible by 64. But just try it for the things that work, and work faster, and use A1111 or another UI for the things that don't :)
Forge has a basic prompt-from-image feature, but it is not very accurate. In the img2img tab, under Generate, there is a paperclip icon; the first time it will download a model, but after that it should work faster, and it gives a basic description of the image you uploaded to img2img.
Mostly in optimization, in how it handles memory, so it generates images faster than A1111, and it has some extras. But it stopped being updated officially, so I have now switched to ComfyUI.
Hi, my installed SD Forge doesn't have the update.bat file. Is there any way to update SD Forge without the file? Maybe by adding arguments to look for an update?
Usually those from Automatic also work on Forge, but I'm not sure if all of them do; you can try and test. I don't usually use outpainting because it doesn't always do a good job; for that I prefer Photoshop Generative Fill.
From your video with the purse, and the drinks can in the desert, I understood that Inpaint Background takes account of, say, the lighting of the masked-out subject when creating a completely different background, as compared with a simple remove/replace background that ignores the masked area. Have I misunderstood? Does Photoshop Generative Fill allow a completely different background prompt, or only an extension of the existing image within a larger canvas?
@@johnclapperton8211 When you inpaint, it looks around so it can paint better, but it's not always perfect. In Photoshop, when you expand with the crop tool it does it automatically, but afterwards you can select the generated part and tell it with the prompt what you want in there.
That is strange; you can try reporting the issue on their page: github.com/lllyasviel/stable-diffusion-webui-forge/issues Maybe it's a bug and somehow it uses the video card too much and makes it crash, but that should not happen. I only tested on 2 computers, and I didn't have a problem on the RTX 2060 or RTX 4090. Try an update in a few days, maybe it gets fixed. I wish I could help you more, but I am a designer, not a programmer; I just share how to use new technology to get images with AI. When it comes to actual code and bugs, if it didn't happen to me, I'm not sure how to tell you a fix :(
@pixaroma Owww... I understand, I haven't seen the other videos yet, I didn't have time; I came here to see the correct folder to paste the file into. Do you have any tips for me? I have a Ryzen 5600G with 32 GB RAM and a 3060 with 12 GB. What is the best SD for me to install?
I never did it myself, but someone commented on Reddit: use a command prompt in the SD directory and type git revert or git reset --hard. You can find the previous version hashes using git log, or there is a list somewhere on GitHub. So for Forge you probably have to go to the Forge folder and then to the webui folder, type cmd in the address bar and press Enter, and you can see all the commits, each with a hash string; then either use git revert followed by that commit hash, or git reset --hard with the hash.
Update: Check this video How to Install Forge UI & FLUX Models: The Ultimate Guide
th-cam.com/video/BFSDsMz_uE0/w-d-xo.html
Here are some useful resources for Stable Diffusion:
Download Stable Diffusion Webui Forge from: github.com/lllyasviel/stable-diffusion-webui-forge
Download Juggernaut XL version 9 from: civitai.com/models/133005/juggernaut-xl?modelVersionId=348913
More info on FreeU:
github.com/ChenyangSi/FreeU
Download more ControlNet SDXL models huggingface.co/lllyasviel/sd_control_collection/tree/main
Extensions used github.com/ahgsql/StyleSelectorXL and github.com/thomasasfk/sd-webui-aspect-ratio-helper
If you have any questions you can post them in Pixaroma Community Group facebook.com/groups/pixaromacrafts/
or Pixaroma Discord Server discord.gg/a8ZM7Qtsqq
Will this work on Mac M1?
Sorry, I don't have a Mac to test it, but I didn't see anything that says it supports Mac, so probably not yet.
@@pixaroma OK tnx
You should probably put these links in the video description. It's way more likely to be noticed.
When I open run.bat it says it couldn't install pip, can you please help?
I have been looking for a tutorial like this for months. You have a real talent for this tutorial style and I HIGHLY encourage you to keep making these videos. Information is packed and logically flowing from one point to the next. Subscribed!
Thank you for showing more than just the installation like customizing the ui and settings :-)
Best video on SD Forge on YouTube. Great contribution to the community!
HOLY COW! I've been using A1111 (and now Forge) for a year, so by now I know most of these "hacks", but I wish a video so clear and so thorough existed when I was starting my journey. I even picked up a new nugget here and there. Bravo! Subscribed. And Saved.
Thank you ☺️
This was exactly what I needed to get started. This is perfection and I can't thank you enough for your work. Bless you man.
We need more guides like these
I like A1111, but I have found more performance in Forge. I must say that you are very good at explaining, excellent video!
Thank you, yeah for me forge is faster and didn't crash like a1111 :)
Does it crash more or less often? I've been using A1111 for a while now, but it feels like it's been crashing more and more. Especially with SDXL models.
Since I switched to Forge it didn't crash at all; it only crashed when I used ControlNet with an image size not divisible by 64 @@edouarddubois9402
@@pixaroma When you say image size, you mean the actual resolution?
The width and height of the image; sometimes I got that error when the size was not divisible by 64, but mostly when I used some extensions @@edouarddubois9402
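If it helps, here is a tiny helper along those lines (an illustrative sketch, not part of Forge): it snaps a width/height pair down to the nearest multiple of 64 before generating, which per the reply above avoids the ControlNet crash.

```python
def snap_to_64(width: int, height: int) -> tuple[int, int]:
    """Round both dimensions down to the nearest multiple of 64 (minimum 64).

    Hypothetical helper: some Forge extensions (ControlNet in particular,
    per the comment above) reportedly crash when the image width or height
    is not divisible by 64.
    """
    return max(64, width - width % 64), max(64, height - height % 64)

# e.g. snap_to_64(1000, 777) -> (960, 768)
```

You would apply this to the txt2img width/height before queuing a ControlNet generation.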
Watched 10 videos about the SD Forge install, but yours is the best: quick, to the point, super useful tips for beginners, and you even say what to do if it crashes.
Thank you, glad it helped ☺️
Do you know minimum hardware requirements?
An Nvidia video card with 6 GB of VRAM is tested; it may work with 4 GB of VRAM, but I'm not sure.
It should be noted, for those who stumbled upon this like I did without knowing any better, that this method only works for nVidia graphics cards. WebUI uses CUDA, which is a proprietary API specifically for nVidia...meaning if you don't have their drivers, you can't natively run Web UI.
Luckily there are forks that exist that do work for AMD Radeon cards, but you'll have to jump through a few more hoops than what is shown here in order to install, and it probably won't run quite as fast as it does on nVidia cards.
Finally I found the best tutorial channel on YT. Thanks a lot!!!
Great walkthrough! Just switched over from EasyDiffusion and Forge is a massive improvement in regard to generation speed.
Duuuude!! So detailed, thank you!! Not hush hush, like well done on your style of explaining. Amazing
Most helpful video on AI to ever exist, give this guy an award please; very helpful, saved me hours, thank you :)
Thanks for your video! I installed Forge yesterday (no stranger to A1111 here), but thought I'd check out a video or 2. For preferred defaults, I've been editing ui-config.json. Didn't realise there was a more straightforward method via settings haha! Dark mode is so much easier on the eye. I set it at browser level so that all pages appear dark, then display settings > high contrast in Windows (7, 8, 10, 11) will give dark mode OS-wide. Thanks for the heads up on SDXL styles and how its extra prompting works. 2 more extensions I use that might help are CivitAI Browser+, which integrates CivitAI into A1111/Forge, and ADetailer (After Detailer), which is an automatic in-paint utility that tidies up facial features; I find it better than GFPGAN and CodeFormer. Your 7-second image generation near the start of your video took 17 seconds for me with the same settings, coming from an RTX 3070 here. You covering file naming was very helpful too, as I wanted to add the denoising value to the file names and use a suffix instead of file numbers :-)
Glad it helped :) I use an RTX 4090, that's why the generation was faster. Regarding styles, check the latest videos; I have one with 260 art styles :)
@@pixaroma Thanks. I have found what you meant + subscribed
Thank you very much! Loved how clear you brought everything across! Definitely am gonna hang around here :)
FLUX has been awesome!!!!!
Wow Thanks for putting in the time to make this!! Is there any guide on using the Train tab (embedding, hypernetwork, train)?
I haven't played with that function yet; it always seems complicated to do training. I tried it on A1111 too, but I don't always get good results; it needs good settings, good images, captions, too many things involved, it seems. And now I saw an announcement that Forge is not going to be updated anymore, that it's used more for tests or something.
Holy shit!!! I didn't know where the ControlNet files were supposed to go when trying to use them in Forge. The vid helped a lot, thanks!!!!
Great info. Also a quick tip: below the image there is a button to upscale using hires fix, just a quicker way to do it. 09:25 I think that option is new with Forge, it wasn't in A1111.
Thank you, yeah, I didn't notice that :) good tip
Very helpful, I will be watching all the videos in this playlist, thanks! BTW, what do you use for your voice? It's great. (If it's not a trade secret, that is.)
The voice is from VoiceAir AI; they have it from ElevenLabs, from what I know. I got a lifetime deal a while back.
Incredible and informative! Well done. Thank you so much for the video.
Very interesting. Good to know there is also another interface
Thanks for this tutorial!! I noticed you're able to generate very quickly. Can I ask for your PC specs? Thank you!
I speed up the video sometimes; it takes about 5 seconds for a 1024px image. I have an RTX 4090 with 24 GB of VRAM; the more VRAM, the faster the generation.
This was VERY helpful, thanks!
First of all, I must say thank you. I started with your videos with the latest one, about Flux, and I am stuck here. Forge UI is fantastic! My only question is whether I can find log files with my prompts? It would be great to keep them.
Well, each prompt and its settings are saved in the PNG you generated, so if you drag a PNG you like into the PNG Info tab you can see the prompt and settings. For more complex logging you probably need a script or an extension; on a quick search, an extension like this could do something similar. I didn't test it, but maybe it gives you some ideas: github.com/ThereforeGames/unprompted
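For the curious, that metadata lives in the PNG's tEXt chunks (A1111/Forge store the prompt and settings under a "parameters" key), so it can be read with a few lines of stdlib Python; the function below is an illustrative sketch, not something shipped with Forge.

```python
import struct


def read_png_text(path):
    """Return a dict of tEXt metadata chunks from a PNG file.

    Forge/A1111 store the generation prompt and settings under the
    'parameters' key of a tEXt chunk.
    """
    with open(path, "rb") as f:
        data = f.read()
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    meta = {}
    pos = 8
    while pos < len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        chunk = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = chunk.partition(b"\x00")
            meta[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # advance past length + type + data + CRC
        if ctype == b"IEND":
            break
    return meta
```

With that, a small script looping over the outputs folder and writing each image's "parameters" entry to a text file would give a simple prompt log.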
@@pixaroma Genial, thank you!
@@CsokaErno I use XnView MP as my default image viewer. It has meta-info tab on the right, no need to import images anywhere, you can just copy, alt-tab and paste prompt+properties into your browser. Besides that, it's a really handy piece of software compared to vanilla windows image viewers.
Love your video. How do I run Forge on Google Colab Pro? Do I just change my Automatic1111 notebook, or do I need something else?
Sorry, I can't help with Colab; usually there are Colabs made just for Forge, but I'm not sure if someone has made one for the latest versions.
@pixaroma Well, thank you very much, I will return to ComfyUI; and it says Automatic1111 is not taken care of yet, yes, many errors.
Nice! Thank you. But unfortunately ControlNet is dead in this build.
For me it works if the image size, width and height, is divisible by 64.
Nice video and tricks bro, thanks!
...20:06 🤣😂😅
✨👌🙂🤗🙂👍✨
Best tutorial ever
I wonder if any of the Stable Diffusion UI makers (Forge, Automatic, ComfyUI, etc.) have considered a method for capturing "recommended model settings" like you point out at 3:29, as going out and hunting down a model's recommended settings is a workflow slow-down. Perhaps you could configure a "model or KSampler template" that acts as a quick preset based on the model. It would be kinda cool to have the option for a checkpoint load to trigger the preset (but again it should be optional, not everyone would want that in all cases). If this already exists, someone let us know.
There is a preset-saving extension, so you can just save settings and give them a name similar to the model you use, so you know what it's for; but many extensions have bugs since the updates.
Check the extension in this video to see if it still works: th-cam.com/video/89YRfqArm-Y/w-d-xo.htmlsi=1va366VyvAt6s1f8
@@pixaroma You rock! thanks for those informative replies! will check that out. -- updated, yep the config preset still appears to work!
Thank you ❤
Great video! Thanks very much. Quick question: do you use TTS for narration? If so, it's incredible; may I ask which one? I've been trying to find something decent for my videos. Cheers :)
It's called VoiceAir; they have the voices from ElevenLabs.
thanks alot! I'll have a look at it@@pixaroma
Thanks for your video. I have a question: my version doesn't have the Training tab? How can I add it? Thank you in advance.
I don't know how to add it to the new version, but you can downgrade to the older version that had it.
Out of curiosity are you using an AI voice tool or is this your real voice? Some crisp amazing voice if its all you :D
Yeah, I am using a voice from voiceair.ai. My voice is OK, but my spoken English is not so good and the accent is too strong; I am better at writing. That's why: that voice can produce clear English anyone can understand, and it sounds good for an AI voice.
the real tutorial we need is how to run ForgeUI using colab, there are currently no colab notebooks provided for it.... some of us have colab pro subscriptions and want to run this on the big boy GPUs
I haven't used Colab for months because I upgraded my PC; I just share tutorials on how I use it and the knowledge I've got so far. Sorry I can't help more.
Hello, really great tutorial. I have a question: I want to use the 4xVALAR upscaler but have no idea where to put it. Could you please answer if you have an idea of exactly what folder it should be in.
Go to your webui\models folder and create a folder named ESRGAN there, so you have the webui\models\ESRGAN path, and put the upscaler model in that ESRGAN folder. That worked for me; hope it works for you.
excellent guide! subscribed!
This was excellent!!!!!
You should definitely meet your goal with this video!!
Thank you ☺️
@@pixaroma do you use an app to get the time stamps?
Like the chapters on YouTube? I use TubeBuddy
thanks for the tutorial. I have many models in Stable Diffusion; can I use them in Forge UI?
Yes, you can use them just like in other interfaces, as long as they are in the right folder or your settings are changed so Forge picks them up from the folder you put them in
Great job keep up the good work
Great tips, thanks!
Thanks for the great video. One question here: does Forge work with AMD GPUs, or does it only run with Nvidia?
From what I know it searches for an Nvidia driver, so no, it doesn't work; many fail to load on AMD. I saw someone commenting that they got it to run on AMD with a UI called SD.Next by Vladmandic
@@pixaroma thanks for the reply. I will check it out, much appreciated
Thank you for this tutorial! ❤ Do I need Automatic1111 Stable Diffusion installed to be able to install Forge? I have the oldest version of Automatic1111 installed and I haven't used or upgraded it, as I couldn't keep up with every new update and the other troubleshooting issues, since I have zero knowledge of programming 😢
You don't need to have it installed for Forge to work; it is a different UI, similar to Automatic, and you just install it in a different folder
Thank You! Brilliant👍
I'm new to the Stable Diffusion GUIs. Experienced people, can you please tell me if Forge WebUI is better than Fooocus MRE? If yes, in what ways is it better? Thanks!
You can have both installed and play around, just put it in a different folder. Forge has more options and extensions than Fooocus, from what I know
The tiling bug has also been fixed. I just checked on a generated image
Awesome, thanks for info ☺️
Great stuff. Is it possible to run it on AMD?
From what i know it needs Nvidia, so not at the moment
8:37 yes, it's the same cute cat we see everywhere; it's the picture on the model. So far nothing that isn't AI
:)
bro you awesome, thanks
This is a great tutorial, but using it on Ubuntu makes me feel a bit sad, because most of the extensions simply don't work or won't install. Maybe it's because of the GPU (RTX 2060, 6GB), but when I had Windows on the same machine, it had more extensions preinstalled and working. For example, I don't have FreeU and ControlNet SDXL. As far as I remember, it used to work better than it does now. Did they make some new updates that made it work worse? (Last used 3 months ago)
They stopped updating the official version; there are some forks still around, but I'm not sure how many updates they get. You can try the last stable version or change to dev2 th-cam.com/video/RZJJ_ZrHOc0/w-d-xo.html. I have the same GPU on an older PC, but I wasn't able to run it with ControlNet, it crashed. ComfyUI works OK, but I didn't try complex stuff yet
@@pixaroma Thank you!
Great tutorial, thanks
Thank you ☺️
6:21 Wow, I didn't know about that; I thought the only way to change it was to edit it manually in some file I don't remember now.
Still, I would like it to have different defaults for each checkpoint; is that possible?
Try this to see if it still works; they keep updating Forge, so it still has bugs th-cam.com/video/89YRfqArm-Y/w-d-xo.htmlsi=kGI45gnzc7iYeFHX
So, in the arguments section where you put the dark theme I can add:
--pin-shared-memory
--cuda-malloc
--cuda-stream
For optimization, right?
Thanks for the video!
Yes, I tried all those as Forge suggested, but they didn't make it faster on my RTX 4090, just slower. Maybe it does better for you, but for me it was faster with no arguments
@@pixaroma gonna try it on my 4080.
It's not working. Is it really in the web-ui.bat file that I should put the arguments?
It's in webui-user.bat. Look for set COMMANDLINE_ARGS= and add them after the equals sign, like I added the dark theme; you can add more, e.g. set COMMANDLINE_ARGS=--theme dark --cuda-stream
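For anyone editing this file for the first time, here is a sketch of what webui-user.bat typically looks like in a default install. The flag list is just an example combining the flags mentioned in this thread; as noted in the replies, the cuda/memory flags can be slower on some cards, so test with and without them:

```shell
@echo off
rem webui-user.bat - startup settings for Forge / A1111-style UIs.
rem Everything after the equals sign is passed to the UI at launch.
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--theme dark --cuda-malloc --cuda-stream --pin-shared-memory

call webui.bat
```

Remove any flag you do not want; leaving COMMANDLINE_ARGS= empty is also valid.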
Is it possible to set all of this up on AWS? Could you please make a video? I am looking for a sketch-to-image model
I don't use AWS, but I saw online that it is possible; check this article, maybe it helps stable-diffusion-art.com/aws-ec2/
One day, could you make a video about how to fix bad eyes and hands? Or you have it but I can't find?
Flux models can do that most of the time without fixing, so if your video card can handle Flux you should try it. I have videos on Flux for both Forge and ComfyUI; Forge is still a work in progress, it works with Flux but other things don't work yet, they are changing the interface and need time to fix everything
@@pixaroma Yes, you are right, but in this case I talked about only SD. Flux can handle hands and eyes pretty well but when two hand are close or touch each other, it makes fault. Anyway, I suppose it will be fixed very soon like everything else in AI realm :)
@@CsokaErno with sdxl I got ok results if i used control net, there are some more nodes that can be used, I will see if I get enough info to get an episode about that in the future
@@pixaroma Thank you.
Thank You So Much - Bro 😍
You are welcome ☺️
Thank you so much. Can it be installed in Colab as well, like Fooocus?
I am not sure, maybe someone else can answer that
@@pixaroma thank you, hope that someone can answer. I don't have a machine with the required performance for a local installation, so that would be a great help. Why am I asking? It's just for the seamless pattern setting that exists in the models presented; this capability isn't offered right now in Fooocus, which is easily accessible with Colab.
It doesn't mess with the normal AUTO installation, it uses its own env?
Yeah, you can have both; it doesn't mess with A1111 or the others
I ran run.bat and it showed "RuntimeError: Torch is not able to use GPU". What happened?
Either your video card is not good enough or Forge doesn't recognize it. I am a designer, not a coder, but you can try adding the following arguments in webui-user.bat to see if it works; it needs at least 6GB of VRAM and prefers Nvidia cards, but try it anyway:
set COMMANDLINE_ARGS=--skip-torch-cuda-test --precision full --no-half
I see someone already posted that in the bugs area; you can watch it to see if it gets any response if nothing else works github.com/lllyasviel/stable-diffusion-webui-forge/issues
I have attempted many times but am unable to get this to run with an AMD RX 580; looking for the CPU version
I think it is looking for an Nvidia driver. You can try ComfyUI or Automatic1111, but it depends on the video card's VRAM
Thanks again for the video! I have a question. Will this model work on my video card if it is only 8 GB? and if not, what options can you recommend?
I tested on 6GB and it worked; only ControlNet gave me some crashes, but the other things worked for me, faster than Automatic, which took ages. It's worth a try, and if you don't like how it works you can just delete the folder with all the files. I believe you can use most of the functions; if Automatic1111 works, this should work even faster
Wow! Thank you so much, I'll experiment. @@pixaroma
you only had to watch the video for 30 seconds and your question was answered holy shit
@@schinie3777 That question is not for this video!
I ran Forge on a 6GB RTX 2060, so it should work. Another solution would be ComfyUI, since Forge will not be updated anymore; and once they update Automatic1111, that will also be a good solution
How to use A1111 extensions? I'm trying but it auto turns em off. And integrated extensions are 💩. Help
Not sure if all of those work, but did you install them from the Extensions tab? Go to the Extensions tab, click Available, then click the Load From button to load them all, and search for an extension. For example, I searched for ratio helper and it installed just fine when I clicked Install and restarted Forge.
How big is it? 60GB without models? And is each model around 3GB or 30GB?
I am not sure about the total size, but usually an SDXL model is 6GB and a 1.5 model is around 2GB.
Thank you very much! I tried it. It can generate and download images, but the window that shows the generated image does not work. What should I do?
I am not sure. Can you join my Facebook group and show me some screenshots, or post there so I can take a look? Do you get any errors, and what does it look like?
Go to Settings and the paths for saving. Set the save file paths (output dirs) to full paths, like C:\pathtoyourimagedir
It worked! Thank you very much!!@@Dark_Lobster
Quick question: what's with IP-Adapters? I cannot access any preprocessors, there are only 3 encoders available. Am I missing something?
You can read more about it here; I didn't play with them in Forge, I mostly use the Canny ControlNet. Also keep in mind which version you are using; there are different forks of Forge now, and the main one is used for beta testing, so many things might not work! github.com/lllyasviel/stable-diffusion-webui-forge/discussions/178
I have an NVIDIA graphics card but I get this error when running Forge: "Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check". How can I solve this?
So in the last month there have been a lot of updates to the Forge UI; today I am working on an update video covering what is new. Go to this discussion page to see what they changed; in the comments some people had the same problem as yours, and it seems to have something to do with Forge. At the bottom of the page click to load more comments, and with Ctrl+F you can search for "torch is" to find the relevant ones github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981 Also check this page, where in the comments some people used different settings in the arguments github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1742
thanks for the ControlNet section, I was stumped on where to put the models. Edit: can anybody help me with LoRA models? I paste them in the LoRA folder but Forge doesn't seem to detect them
I just tested now with a file and it seems to be recognized: webui\models\Lora, so the folder is Lora. After you paste it there, go to the Lora tab in the interface and refresh the page, or just restart Stable Diffusion so it can see it
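Putting the folder advice from these replies together, here is a rough sketch of the default model layout; exact folder names can vary between Forge versions, so treat this as a guide rather than gospel:

```
webui\
└─ models\
   ├─ Stable-diffusion\   checkpoints (SDXL, 1.5 models)
   ├─ Lora\               LoRA files; refresh the Lora tab after copying
   ├─ ControlNet\         ControlNet models
   └─ ESRGAN\             upscalers such as 4xVALAR (create the folder if missing)
```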
What TTS model/software are you using?
VoiceAir ai, they have the voices from elevenlabs
20:04 wtf was up with that laugh, is this an ai voice cuz thats pretty good
Yeah is an AI voice :) elevenlabs
Niice, thanks for sharing
which GPU do you recommend for the lower budget?
Any Nvidia RTX that you can afford; the more VRAM, the better. Minimum 6-8GB of VRAM, but if you can get more you will generate faster
Little question: did anyone have a problem with LyCORIS models on Forge? I'm using Forge through Stability Matrix, and no matter how I load -- from my computer or Matrix's models loader -- it just doesn't show up in Lora's tab. And when I load it in Lora's folder, it doesn't work correctly.
I didn't use LyCORIS yet, but I hope someone can help you with an answer. It's possible that Forge still has some bugs
i dont have a update.bat file. where do i find it? i can run forge just fine but been trying to find out how to update.
It should be next to run.bat and environment.bat, a file called update.bat; I have had it there since installation, so yours should have it too. Just be careful with updates and keep a good stable version; check this video th-cam.com/video/RZJJ_ZrHOc0/w-d-xo.htmlsi=rF-9wCmzResJiW3L
My built-in controlnet's IP-Adapter is missing its models, and thus, doesn't work. Any ideas? I wanted to install them manually, but the library is different, and so are the files.
Forge still has some problems with ControlNet; check this discussion, maybe it helps github.com/lllyasviel/stable-diffusion-webui-forge/discussions/178#discussioncomment-8572388
All this stuff and for some absurd reason they removed the "refresh" button for all tab files.
It is not perfect; I hope they can update it soon, since it has been a while since the last update
@@pixaroma It has it now, but there are bugs that cause LoRAs not to generate; it just displays "Error". It's amazing when it works, but annoying how often Forge UI just has problems.
Eventually I just learned how to use Comfy UI and enjoyed having something that actually just works.
@@sociallyresponsiblexenomor7608 yeah that why i switched to comfy also and created that comfyui series, I learn new things each day and got used with nodes
What if you use both 1.5 and XL checkpoints? Do you have to keep manually changing the FreeU settings every time? Also, with HyperTile, should the tile size be half of the longest side of the generated dimensions? What if I am using x2 hires fix, should it be half the hires fix size? What do the HyperTile swap size and depth settings do? And is there a way, like in SD.Next, to have HyperTile set to automatic mode?
I don't use those settings enough to give you more details. I know that for ControlNet you have to keep changing models manually; 1.5 is different from SDXL, so it needs different models and settings
@@pixaroma yes that's to be expected for control net as 1.5 and xl use different model. That's easily done. What's not easily done is having to remember the precise settings for freeU for 1.5 and xl. Can't believe there is no preset option to load these on the fly. I also am disappointed that there is no auto mode for hypertile tile size. Damn, all these different UIs should come together and make one ultimate ui
How do I add the "ip-adapter_face_id_plus" preprocessor for IP-Adapter? It's not in Forge. "ip-adapter_face_id_plus" works better than "InsightFace+CLIP-H (IPAdapter)"
someone said that the name are different, check this page discussion github.com/lllyasviel/stable-diffusion-webui-forge/discussions/178
Thanks!
Could you please tell if this can also be run on AWS sagemaker?
Maybe look here gist.github.com/ShMcK/d14d90abea1437fdc9cfe8ecda864b06 aws.amazon.com/blogs/machine-learning/use-stable-diffusion-xl-with-amazon-sagemaker-jumpstart-in-amazon-sagemaker-studio/ As I don't use AWS, I can't help more
Please help, I got this error when I started run.bat:
C:\Users\moor\stable-diffusion-webui-forge>python launch.py
Traceback (most recent call last):
File "C:\Users\moor\stable-diffusion-webui-forge\launch.py", line 1, in
from modules import launch_utils
ImportError: cannot import name 'launch_utils' from 'modules' (C:\Users\moor\AppData\Roaming\Python\Python310\site-packages\modules.py)
I'm not sure what that error is, but you can try creating a folder on another drive and doing a fresh install; that might make it run. I don't know coding, but it looks like it cannot import a file (the traceback shows Python picking up modules.py from your AppData packages instead of Forge's own modules folder); maybe something didn't download, or it is a bug. That's why I say to try a fresh install in another folder
Just updated to the latest Forge version, the one that can work with Flux, but I'm using only SDXL on my 8GB card: every time I do inpainting or img2img, the result has lower saturation than the original. Is it just me? Assigning a VAE does not solve it 😢
There are a lot of bugs in the new version, so it will take a while for everything to get fixed. This one looks similar github.com/lllyasviel/stable-diffusion-webui-forge/issues/1189 and if you look at the list of open issues there are like 600 github.com/lllyasviel/stable-diffusion-webui-forge/issues
@@pixaroma thank you for your answer!
Where is the train tab? Why don't I see it now?
Check the pinned comment; the new version doesn't have it, you would have to downgrade to an older version
I use Forge's Deforum tab to create animations. I would like to know how to create the animations within a boundary. I do projection mapping, so I would like to keep the animations within the map of my house. Would you know how to accomplish this? I have a PNG map file that I created but am unsure what to use it with.
TIA
Sorry, I haven't played with Deforum yet, so I can't help there. I like to create HQ images, and video/animation isn't quite there yet; I am waiting for an improvement before I jump into it
Question: is Forge compatible with Hyper and Lightning models?
I used a Lightning Juggernaut model and it worked. I saw they released Hyper also, but I didn't test it yet
@@pixaroma Ok, thx.
@@pixaroma ADETAILER works too?
@@ZeroCool22 I think there were some problems with ADetailer and some extensions. ControlNet, for example, only works for me if the image width and height are divisible by 64. But just try it: use Forge for the things that work (and work faster), and A1111 or another UI for the things that don't :)
Brother, do you know how to generate a prompt from an image for free, as some websites charge?
Forge has a basic prompt-from-image feature, but it is not very accurate. In the img2img tab, under Generate, there is a paperclip icon; the first time it will download a model, but after that it should work faster, and it gives a basic description of the image you uploaded in img2img.
How do I share existing models on A1111 with forge?
You can edit the Forge bat file; I explained it in this video th-cam.com/video/q5MgWzZdq9s/w-d-xo.htmlsi=VQDUDjPvi256KCps
There is no backwards compatibility with dynamic thresholding when using png info
There are still some bugs and extensions that don't work yet; they will probably be fixed in time.
Where should we put lora files? There is no lora folder in models
You should have a folder for LoRA; look at this video for how I download them and where I put them th-cam.com/video/q5MgWzZdq9s/w-d-xo.htmlsi=nKX2enJ7KPEAoGIF
What is the difference between Auto111 and Forge?
Mostly optimization in how it handles memory, so it generates images faster than A1111, and it has some extra things; but it stopped being updated officially, so I have switched to ComfyUI now
@@pixaroma damn u.u any tip for someone with a 3060 who wants to use Pony without waiting 30 minutes for an image?
hi, my installed SD Forge doesn't have the update.bat file. Is there any way to update SD Forge without it? Maybe by adding arguments to look for an update?
It should be there next to run.bat, in the folder you extracted, not in the webui folder
Is the inpaint background extension available in Forge?
Usually extensions from Automatic are also on Forge, but I'm not sure if they all work; you can try and test it. I don't usually use outpainting because it doesn't always do a good job; for that I prefer Photoshop generative fill
From your video with the purse, and the drinks can in the desert, I understood that Inpaint Background took account of, say the lighting, in the masked-out subject when creating a completely different background, as compared with a simple remove/replace background ignoring the masked area. Have I misunderstood?
Does Photoshop Generative Fill allow a completely different background prompt, or only an extension of the existing image within a larger canvas?
@@johnclapperton8211 When you inpaint, it looks at the surrounding area to paint better, but it is not always perfect. In Photoshop, when you expand with the crop tool, it does it automatically; afterwards you can select the generated part and tell it with a prompt what you want in there
I installed a couple days ago, and when I try to render images, it shuts down my pc, which hasn't happened with MRE or A1111
this is on a 3060
That is strange. You can try reporting the issue on their page github.com/lllyasviel/stable-diffusion-webui-forge/issues; maybe it is a bug in how it uses the video card that pushes it too hard and makes it crash, but that should not happen. I only tested on 2 computers, and I didn't have a problem on an RTX 2060 or RTX 4090. Try an update in a few days, maybe it gets fixed. I wish I could help you more, but I am a designer, not a programmer; I just share how to use new technology to get images with AI. When it comes to actual code and bugs, if it didn't happen to me I'm not sure how to tell you a fix :(
I do not see the SDXL styles that you show in the video at 18:12 - how do I enable that?
I explain it in this video; it is a file I created that you can download and put in the right folder th-cam.com/video/UyBnkojQdtU/w-d-xo.html
@@pixaroma got it
I have a problem launching the Web UI (1:42): "Found no NVIDIA drivers on your system..." Am I able to run it on an RX 580 8GB?
I think it only works with Nvidia Video Card for now, that is why it says it didn't find a driver
ControlNet doesn't work for me... with OpenPose, SD is ignoring the pose and generated something random, nothing like it
Not all ControlNet models work; I mostly use Canny, as you can see in my other videos. The rest of them are not working as they should.
@pixaroma Owww... I understand. I haven't seen the other videos yet, I didn't have time; I came here to see the correct folder to paste the file into. Do you have any tips for me? I have a Ryzen 5600G with 32GB RAM and a 3060 with 12GB. Which SD UI is best for me to install?
@@alterinfo9448 I suggest SwarmUI or ComfyUI; I am adding a tutorial in an hour on how to install both, including Stable Diffusion 3
can you add image to video with this?
The latest version doesn't have the SVD video tab, only the older version does. Maybe they will add it back in new updates
Pls installation for Mac users
I don't have a Mac so i can't help you there, sorry
Hello, thanks a lot. I want to downgrade to Forge UI 1.7 but I can't find any information
I never did it, but someone commented this on Reddit: use a command prompt in the SD directory and run git revert or git reset --hard. You can find the previous version hashes using git log, or there is a list somewhere on GitHub. So for Forge you probably go to the forge folder and then the webui folder, type cmd in the address bar and press Enter; then you can see all the commits with their hash strings, and you use git revert or git reset --hard with the commit hash you want
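The flow described above can be tried safely in a throwaway repo before touching a real Forge install. The folder and commits below are made up for the demo; in a real install, git reset --hard discards local changes, so note your current commit first:

```shell
set -e
tmp=$(mktemp -d)               # throwaway repo, safe to experiment in
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo v1 > file.txt && git add file.txt && git commit -qm "version 1"
echo v2 > file.txt && git commit -qam "version 2"

# Find the hash of the older commit (git log lists newest first)
old=$(git log --oneline | tail -1 | cut -d' ' -f1)

# Pin the working tree to that commit, i.e. the "downgrade"
git reset --hard -q "$old"
cat file.txt
```

In a real Forge folder the same two commands apply: git log --oneline to find the hash of the version you want, then git reset --hard with that hash.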
@@pixaroma thanks ill give it a try !
what AI do you use for voice generation?
VoiceAir Ai
@@pixaroma ty