You can now support the channel and unlock exclusive perks by becoming a member:
th-cam.com/channels/mMbwA-s3GZDKVzGZ-kPwaQ.htmljoin
Join the conversation on Discord discord.gg/gggpkVgBf3 or in our Facebook group facebook.com/groups/pixaromacommunity.
Your tutorials are pretty much the best I've ever seen. Most others either leave out important details or, if they don't, the whole thing is two hours long. You are the only one who keeps it short while still including every single step and lots of useful information. Much appreciated. Thank you!
Thanks!
Thank you so much for your support ☺️
and thank you from a viewer for supporting him. You help all of us as well.
I don't comment often on people's tutorials because most of them suck. Yours are excellent. They are detailed, clean, and to the point. You clearly took the time to create these videos and I thank you for that.
What a great and clear introduction to Forge-Flux Combo, up and running on the first try :) Thanks!
yes!
I've been playing around with Flux on Forge for a few days, and this is the best "how to get started" video I've seen yet! And I learned a bunch of other basic stuff too!! Thank you! You earned Fi Dolla!! 🤑
Thank you very much. I always try to make videos that can help, the kind I wish I could find on any subject: explained so beginners can understand, yet not so boring that advanced users skip them. Since the channel grows slowly and I don't earn much, any contribution helps me, so thank you for your support 😊
the best "how to get started with offline genAI" tutorial I've found! fantastic stuff! This is what we REALLY need. People to teach other people how to get their feet wet and be participants not observers in the AI revolution, or at least play with it and understand what it is-- and what it's not -- when politicians and breathless "tech communicators" start ranting about it.
You do not need to restart UI to add styles, just use the Edit style button and refresh the styles with the icon. Well made video tutorial!
This is the first video I saw on the channel and I really want to thank you for the detailed tutorial and the amazing styles you provided 🙏
Excellent not only for Flux but for basic Forge installation and usage. Best! Thanks ! ! ! !
Hello brother, may I ask where you are running Flux? Have you ever run it on MimicPC? Everyone says it works great there, but maybe due to my limited ability the images it generates are not to my satisfaction, and I would like to know where I can tweak it in terms of details.
Very nice tut, buddy. Easy to follow and provides all the little extras that come in handy. Not many on YouTube have this talent. Please don't change your style.
Thank you ☺️
This is awesome work man. I never comment on stuff but this is hella detailed.
Thank you for this information, because my head is exploding with building up my model but also the installation of it all. I tried SDXL (which only has Flux LoRAs at the moment), ComfyUI (which almost fried my PC due to a faulty setting) and now Forge, which is an absolute beast. I will be looking forward to your next upload.
Thank you so much for this tutorial. I have been trying to get Flux installed locally for a week ( I'm kinda slow) and this is the first tutorial that clicked with me and now I'm generating locally. So happy... although I definitely need a new GPU LOL.
Awesome, precise guide without being too technical. Thank you so much.
This was an excellent all-round tutorial for lots of concepts and comparisons, thanks for taking the time to do this, very much appreciated.
I was exhausted running cmd commands. Thank you so much for that roll back technique.🧡 Forge + Flux seems unstoppable. 🤩💪
Thank you for the styles. Also thank you again for the tutorials. I was thinking that Forge is much too complicated for a beginner, but with the information from your videos it's becoming a piece of cake. 😉
Great video. It's so hard keeping up with the advances of Flux, and trying our best to use it outside of ComfyUI makes life much easier. The problem is using the best Flux model that won't drain the life out of your VRAM in another UI. Nothing against ComfyUI, there are just times when you want things less complicated, and Forge UI to the rescue.
This is very comprehensive. Thanks for the hard work 😊
Thank you for the tutorial friend, keep creating and being awesome!
Great video! I feel nostalgic watching Forge again.
Great tuto, i love how you simplify things, keep going
Thank you very very much for this insightful video. I used it as a tutorial to install flux on my machine.
Danke! (Thanks!)
Thank you so much ☺️
Thank you! best tutorial around for sure!
FINALLY!!! A WAY TO USE FLUX WITH ONLY 6 GB VRAM!!! It takes 20 seconds for the schnell model and 1:30 minutes for the dev model!!!!
Great video with detailed info. Since I've discovered it, Forge is definitely my favorite platform. I am still on a slow RTX but it's a trooper; I will try the Schnell Flux model, really hoping the card is good enough to handle it. Fingers crossed! :)
short and sweet. thank you very much sir!
You rock big time !!! Thank you for your valuable explanations 🥰
Great tutorial, thanks for the info and links.
Thank you very, very much. You change my life :) More, more of this, please!!!
Thank you. That was really great introduction.
@pixaroma But after installing all the files the image section is blank and no generated images show, what shall I do?
Okay, I managed to extract the file! It works, thanks!
Prob the best video on flux I've seen
Heard about ForgeUI yesterday; finding your video was a treasure of information on how to get going.
Appreciate the clear, noob-friendly instructions and all the tips.
You, Sir, earned a sub and a new fan today!
Have a great weekend
Thanks for the video and for the rollback script! Now Tiling works again.
Thank you for this! I've been waiting. I just can't wrap my old mind around ComfyUI.
You should check out the ComfyUI series; after 2-3 episodes it will make sense. I was like you and avoided ComfyUI for months, but now you can really do much more with it, and I am trying to simplify the nodes so it looks similar to Forge.
I just fresh-installed Forge on my PC and followed both the one-package folder and the manual way. Neither method shows me TRAIN, SVD and Z123. I also followed the guide @16:24 but it's still the same: when I run Forge it still does not have those 3. I used the exact same commit shown in the video in the rollback file, still the same issue. Please take note that this is the very first time I am using and installing Forge. I haven't used Train, SVD and Z123 on Automatic1111. Do I actually need to install those first on my PC? Thanks!
So yesterday when I did the video it worked ok. Did you get a message or something saying it couldn't switch to that commit? I mean, if everything was done like in the video and the Forge UI was the same, you should get the same result. When you start Forge, check if it starts with the version you put in the commit; if it does, it should have that old interface. But that is only useful if you need SVD, and I haven't used it in months since there are better AI tools for video, like Kling, Luma and Runway.
THANKS SO MUCH. YOU DA MAN! SUBSCRIBED!
Awesome video! Thank you!!
Great stuff man, thanks!
I'm running a 4070 Ti with 32GB RAM and a 2TB M.2, but my renders take about 3 minutes. Though I must say Flux is out of this world. The quality, but also the reduced system requirements, really make this the best at this moment.
Try more updates; for example, when I finished the video I did another update, and on an older PC it was faster with Diffusion in Low Bits on Automatic. Dev nf4 only took less than 2 minutes on an RTX 2060 with 6GB of VRAM, so try updating and see if it gets faster. Since the RTX 2060 is a weaker card than yours, it should not take 3 minutes for an nf4 model; the Schnell nf4 model is 27 seconds on the RTX 2060 and 3.4 seconds on the RTX 4090.
@@pixaroma Sorry, I want to make sure: I have that RTX 2060 card. You said in the video I can generate a photo with the DEV model but it takes 9 minutes, and 20 seconds for a Schnell model?
@@liquidmind Yeah, that's the time it took, but the next day I updated Forge again and now it takes 27 sec for Schnell nf4 and 1 min 40 sec for dev nf4, so it was probably a bug when I tested and they fixed it.
@@pixaroma Ok, that's great! One more question: there are many demo webpages, free and paid subscription, and some of them even use a BLEND model of DEV and SCHNELL. Why is it different to use it locally? Is it uncensored only locally?
@@liquidmind They mostly ask for money because you are using their resources: the power costs money, the computers running and the big video cards all cost money. Online they probably add more safety and make it more censored. But the models are the same, or a mix of models, or they add some extra LoRA to get more style and make a fancy interface. For example, you could take the Flux Schnell model, make an interface for it, put it on your server, put it online and charge people for generating with it. Others just use the API; there is a Pro model that gives you a certain number of generations for a certain amount of $, so you could build a fancy interface, call it whatever you want, and charge people more than the API and server maintenance cost so you make a profit.
Great video, thanks for what you do, keep up the good work!
Thanks, will do ☺️
Thank you, this helped a lot!
About what you mentioned at 5:32: I've got a 1650 on the PC I'm trying to run this on. Will there be any issues other than slow image generation?
Only if you try it; the system can influence a lot how things work. For some it might crash, for others it works, it depends on many things, so just try it. See if it works with SD, then maybe try Flux Schnell.
10/10 great tute!
So question, I get this: AssertionError: You do not have CLIP state dict!? Any ideas why?
check this github.com/lllyasviel/stable-diffusion-webui-forge/issues/1075 and this github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050
I mean, it's the ULTIMATE GUIDE so I gotta watch it lol
What model are you using for your voiceover? Is it F5? It sounds so real, I could barely tell it was AI at first!
I use VoiceAir; they have the voices from ElevenLabs, so ElevenLabs is the source.
Which TTS did you use for this video?? ...It sounds pretty natural to me, nicely done!
VoiceAir and ElevenLabs; VoiceAir uses the API from ElevenLabs.
Huge amazing clear video, sub, like, everything bro.
thank you 🙂
I can't tell if the voice is AI or not - it flips between sounding real and fake constantly 😂
Great video all the same!
Yeah, it is AI :) It's hard to keep it consistent since I do the text a few sentences at a time. I don't write the text for the whole video; I watch the recording and write the text for one minute of video, then I convert it to audio, then synchronize it with the video, then repeat :)
@@pixaroma Makes sense! Thanks again for the video. I've been using AUTOMATIC1111 with SD for about 18 months and this helped with the minor differences switching to Forge and FLUX.
I have an RTX 3060 with 12GB of VRAM and 32GB of system RAM. Using the dev.bnb.nf4.v2 safetensor with the Async swap method and shared swap location, I get a render time of around 1 min 35 sec. With Schnell and the right step count of 4, my render time is 30 seconds. Some of my Schnell renders have come out pretty good. Considering the price point of the RTX 3060, I'm happy.
That is good. On my RTX 2060 with 6GB of VRAM I get 27 sec for Schnell nf4; I wonder why you don't get a faster speed for that since you have a better card. For dev nf4 I get 1 min 40 sec; I have been using the auto default.
@@pixaroma As well as changing the default swap method and swap location, I changed the image dimensions to 1024x1024. Changing my swap method back to Queue, I get a render time of 1 min 22 sec for dev-nf4-v2. I think you just have to play around with the options to see what works best on your system.
1 min 35 sec sounds insanely long to me. I've got an i9 13900K, 128GB RAM and either a 4090 or a 4070 Ti Super to compare, so yeah, I understand the hardware difference, but mine takes at most about 20 seconds to render an image.
@@moelleunbelievable 1-2 minutes of render time is typical for the 3060 using the dev model. If you check the comments, you'll see that some people's systems are taking over 3 minutes to render an image with similar or superior hardware. I just updated Forge and retested: dev is now down to 1 min 20 sec and Schnell is down to 21 sec. For most images, the Schnell (fast) model is fine. Newer cards will of course be faster, but they have a poorer price/performance ratio.
What could be wrong here? At 2:33 in the video he has a model in his checkpoint dropdown, while I don't have any model even though I did the same steps he did. Please, I really need help.
Maybe you don't have long paths activated in Windows and it wasn't able to download it. You can download it manually; there are a lot of models on Civitai.
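If you want to turn long paths on, a small .bat run as administrator should do it; this is just a sketch of the usual registry switch, reboot after you run it:
@echo off
REM Enable Windows long paths so deep model folders don't break downloads.
REM Run as administrator, then reboot.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
pause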
Thanks a lot, I learn a lot from you.
Thank you!! Nice and easy to follow.. got it working
The great video, the great forge
12:13 I have a 2060 with 12 GB, and still, making the cat image in 4 steps with that model takes me 23 seconds. Is that normal? The settings are the default ones from Flux. And if I go a few seconds without making images, it can take up to 40 seconds when I resume
It depends on your system. I also have a 2060, with 6GB of VRAM, but I have 64GB of RAM and Forge is on a fast SSD. There are many things it takes into account, and the first time is slower, so it might be normal, I'm not sure; or maybe it's a bug and in a few days, if you update, maybe it works ok. It's hard to tell.
I'm surprised you were able to get Flux to work immediately. I get AssertionError: You do not have CLIP state dict!
which seems to imply I need to install other things for the model to work.
Make sure you are using the right T5 and CLIP models; they changed the interface after I did the tutorial, so you need the right CLIP models for the right Flux model.
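A quick way to see what you actually have in place is to list the model folders from the package root, roughly like this (a sketch that assumes the usual Forge layout with a text_encoder folder; the folder names may differ in your build):
@echo off
REM List the checkpoint, VAE and text encoder folders so you can see what is missing.
dir /b "%~dp0webui\models\Stable-diffusion"
dir /b "%~dp0webui\models\VAE"
dir /b "%~dp0webui\models\text_encoder"
pause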
Thanks for this! - The new updates were pretty broken :D
Keep looking for recent commits that say verified; there have been a few new versions since I made the video 2 days ago :) They update it every hour, hope one of them will be stable enough.
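If you prefer checking the recent commits from the command line before picking one, a small .bat next to the rollback one should work (a sketch assuming the one-click package layout, with the bundled git folder next to webui):
@echo off
REM Fetch and show the latest Forge commits so you can pick a hash to check out.
set PATH=%~dp0git\bin;%PATH%
git -C "%~dp0webui" fetch
git -C "%~dp0webui" log origin/main --oneline -20
pause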
Thanks, can't wait to put an upgraded video card in my computer.
Thank you for this.
Quick note: if you want to revert or update the torch or xformers version, you can use a command in the .bat file. Open it with Notepad and add --reinstall-torch or --reinstall-xformers (in the webui folder); there's a rough sketch a bit further down.
Thank you ☺️ good to know
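For anyone wondering, that edit would look roughly like this in webui-user.bat inside the webui folder (just a sketch of the usual setup; run it once with the flag, then take the flag out again):
@echo off
REM webui-user.bat -- add the reinstall flag for a single launch only,
REM then remove it so torch/xformers aren't reinstalled on every start.
set COMMANDLINE_ARGS=--reinstall-torch
REM set COMMANDLINE_ARGS=--reinstall-xformers
call webui.bat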
Thank you for the video, very useful.
Thank you so much!
Excellent! What is the best commit to get back Train, SVD and Z123? Thx
I used the one that I showed in the video, the one that starts with 29.
@@pixaroma thank you so much !
Great tutorial!
Please make a video on Flux ControlNet, Flux IPAdapter, Flux FaceID or InstantID, all inside Forge UI 🙏
I rarely use Forge; I prefer ComfyUI, I have more control there, and Forge still has a lot of bugs and is frustrating.
Can you make a RunPod installation video please?
I only use it locally, and I have never used RunPod, so I don't know how to do that.
Great vid! Do you know if controlnet works?
It's possible it doesn't work; I see they keep working on LoRA, and hires fix also had some problems. Every hour you see new updates, so maybe look at the commits page to see what they have fixed.
Do you know how I can fix the error ValueError: Failed to recognize model type!? My Stable Diffusion is not working.
It still has a lot of bugs; maybe you can report it here github.com/lllyasviel/stable-diffusion-webui-forge/issues as you can see it has over 700 open issues. I'm not sure what model you are using that is not recognized; if it works with other models, it must be a bug in recognizing that model.
Thanks a bunch for the revert bat!! I was having troubles with generation if the window lost focus, tabbed out etc, couldn't stop the monster!!!
Ayyyooo 🔥🔥🔥🔥
Do you also have it for Mac?
No, only for Windows, but check this out, it seems others have installed it github.com/lllyasviel/stable-diffusion-webui-forge/issues/314
@@pixaroma I will take a look. Thank you for the link.
Any chance of running this on an M1 Mac?
Not sure, but check the comments of this post maybe you can make it work, github.com/lllyasviel/stable-diffusion-webui-forge/discussions/270
I have an issue when I run update.bat: the file does not run in my terminal.
If it is the first run, make sure all the files extracted correctly; sometimes when you extract the zip it doesn't extract all the files, so maybe it's that. I'm not sure what causes it to not run, maybe check their GitHub page.
When I installed it, it didn't download a checkpoint for me, no idea why. Luckily I already have some in another folder.
Sometimes that happens; it might be long paths not being active in Windows, or there was an error when trying to get it from Hugging Face or wherever the model was hosted.
Great video! Following it was nice and easy. My poor 4070 super isn't happy with it though lol
I got it to work on an RTX 2060 with 6GB of VRAM as well: 27 sec for Schnell nf4 and 1 min 40 sec for dev nf4.
@@pixaroma turned out I still had automatic open. Silly me 😆
How is the 4070 super? I’m thinking of getting that one hmm. Thanks a bunch.
@TomLally it's plenty fast enough. Images usually don't take more than a minute to produce. If using SDXL it's even faster than that.
@@travisgumm5861 amazing! Well right now I’m only using Fooocus. The whole comfyui is confusing haha. But that certainly sounds great
Can you tell me approximately how long it should take to extract the files with WinRAR into the Forge folder?
A few minutes depending on your system
@@pixaroma I just don't understand. I have an RTX 2080 Ti graphics card, a Ryzen 9 and 32GB of RAM. Do you have any idea what my next steps should be?
@@stephenzadach1870 Maybe try to download it again and extract it. It's also faster on an SSD; try a different folder maybe. I never had a problem installing, so I'm not sure what could be the cause on your system.
The biggest problem I have is with restrictions. I want to use this for books, game plot/story books, so I need to generate fighting scenes as well...
Maybe train a LoRA with the types of images you want to generate, or search for different models on Civitai.
Thank you for this. The only issue for me is that LoRAs don't work;
I get this issue, how can I fix it?
Patching LoRAs for KModel: 74%|███████████████████████████████████████▏ | 225/304 [00:34
First of all, from what I know they still have problems with LoRA and a few other things, not sure if they are fixed. Then, usually when it says out of memory it's because you don't have enough VRAM on your video card. So is your card good enough? That could cause some problems. But also the version: try updating every day and see if you can get a version that works. I see updates every hour adding or fixing something on the commit page, like I showed in the video.
@@pixaroma Thank you for the response. Yeah, I have the 12GB version. Artwork generates fine without a LoRA, but when I try to use a LoRA it breaks haha
@@BombXXplosive I saw some saying the Flux LoRAs don't work yet with Forge, only normal LoRAs with other models, so that might be the cause. You should be able to use a LoRA with 12GB.
@@pixaroma ahhh damn ok!!! So I’ll need to use it on comfy then. Thank you
Hey, can you give me info about the ControlNet I should use with this?
For the Flux models you should wait a little; the Flux ControlNet models are still getting better, maybe they will get a union version like on SDXL.
Does it work with Deforum? Or AnimateDiff?
I didn't try it, but there are still things that don't work. If you look at the commits, like every hour or even every few minutes there is a new version, so they keep fixing and adding new stuff. Eventually everything will work, but it will probably take some time github.com/lllyasviel/stable-diffusion-webui-forge/commits/main/
@pixaroma Thanks man.. I'll wait a bit more before I install it..
Thanks for the nice videos !
Thank you for this, I needed the discussion on samplers. (so far DPM++ 2M / SGM Uniform seems best for speed+quality!)
I'm running on a 4070 (12GB VRAM) and 64GB memory, with the full DEV model and T5xxl_fp16, and I get between 3.2s/it and 6.5s/it (so renders take 2-3 minutes each, but at really amazing quality! Strangely, resolution doesn't have a huge impact on speed; steps obviously do.) So yes, ForgeUI can run the full DEV FLUX.1 without a 4090!
Strange, I have a 2060 with 6GB VRAM and I get faster renders... why? I can do a Schnell photo in 4 steps in 20 seconds and a DEV photo in 1:20 minutes at 20 steps, and I'm using the bigger DEV fp8 model. Maybe the sampler you are using is slower... I use Simple instead of SGM Uniform, maybe that's it.
I didn't have any luck with Samplers other than Euler... Maybe with ...NF4...V2 the DPM samplers work?? That would be great! Good details! Thx!
@@GenoG DPM++ 2M is fine, but with Simple, not Karras. Karras sometimes gives grey results. Euler is nice at 8 steps for Schnell or 22 steps for DEV.
I have an RTX 4060 with 32GB of RAM. Just curious if I should go with nf4 seeing as I have the specs, or would the quality still always be best with fp8? Would there really be a notable difference?
Try both and see; it depends on preferences. Sometimes there is a noticeable difference, sometimes not so much.
Thank you!!
I was told that Flux is what I was looking for to try to get a consistent character in the same pose but from multiple different camera views. Do you know if this is possible and why someone recommended Flux?
I saw someone using it in ComfyUI, not sure about Forge. Flux is better at prompt understanding, so you can probably get the character in one image from different angles using only good prompts. You can also check this for ComfyUI www.reddit.com/r/comfyui/comments/1elfcef/flux_consistent_character_sheet/
I have an error "You do not have CLIP state dict". Followed everything you did. Please help.
They changed some things in the interface; make sure you have the right CLIP models. Check this post github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050
Can I delete Stable Diffusion after I downloaded Flux?
Yes, you can keep any models you want; you can delete the rest and leave only the models you use.
Can we use 2 GPUs? If so, is there anything to change?
check this discussion on reddit www.reddit.com/r/StableDiffusion/comments/1e3cn7m/use_2_gpus_in_stable_diffusion_forge/
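If you just want each card doing its own work, a common approach is to run two Forge instances, each pinned to a different GPU and port, roughly like this in a second launcher .bat (a sketch; --device-id and --port are the usual A1111-style flags, adjust if your build differs):
@echo off
REM Second Forge instance pinned to GPU 1, on its own port so both UIs can run at once.
set COMMANDLINE_ARGS=--device-id 1 --port 7861
call webui.bat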
awesome!
is flux new time paid? so how long can we use it this way until
I didn't get the question, but Flux 1.0, the one from the video, is free to use, including the output / the generated images.
The new one, Flux 1.1 Pro, is only available online via API, so there is no download for that one.
Can you make a tutorial video on how to install reForge?
Since ComfyUI, I don't use Forge anymore. I get all the new things in the AI world first in ComfyUI, and it is more stable and gives me more control. I try to make the ComfyUI workflows look simpler so beginners can still use them.
Can you paste the bat command? I want to use that version.
I saved the bat in google drive drive.google.com/drive/folders/1bS-6HdLl5AH3Rbd2wHUm_nILUOnu9hmJ?usp=sharing
Create a bat file any name you want, something like
rollback.bat
and add this text inside
@echo off
set PATH=%DIR%\git\bin;%PATH%
git -C "%~dp0webui" checkout 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
pause
When you run that rollback.bat it will load that specific version
Then create a
rollforward.bat
and add this text inside
@echo off
set PATH=%DIR%\git\bin;%PATH%
git -C "%~dp0webui" checkout main
pause
@@pixaroma Do you know how to reset the settings as well? Because I think the settings of the "newest" version ruin the generation.
@@muppetboneheadxdd2606 I don't know, other than trying to do it manually, adding the settings like in the video. I haven't used Forge for a while now since I switched to ComfyUI...
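If you do want to try a manual reset, the settings live in A1111-style json files, so something like this should put them back to defaults (just a sketch based on that assumption; Forge recreates the files on the next launch):
@echo off
REM Back up the saved settings; Forge (like A1111) keeps them in config.json and
REM ui-config.json in the webui folder, and recreates defaults on the next launch.
ren "%~dp0webui\config.json" config.json.bak
ren "%~dp0webui\ui-config.json" ui-config.json.bak
pause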
Will LoRA models work for Flux?
I think it doesn't work yet for Forge. I saw some using LoRAs in ComfyUI, but for Forge, at least with the nf4 version, they said it didn't work yet. You can try updating Forge from time to time, maybe it gets fixed.
Is there any quality difference between fp8 and nf4 bnb v2?
Only on some prompts, and it's very subtle; fp8 has more detail on some generations, so I prefer fp8.
What is the difference between Auto1111 and Forge now?
Forge is faster and the interface is similar, but they added more stuff to Forge, and they are still adding, so it still has some bugs.
tried to get the 5.95 GB version of flux1-schnell-nf4 but it doesn't work, wonder why?
Not sure, only the bigger size worked for me, the one I used in the video. They will probably make it work eventually, but I'm not sure how much smaller they can make it without losing quality.
@@pixaroma it's a mystery.
Great guide! Followed all the steps, but Flux dev on default settings is crashing during model unload. Any ideas what's going on? 4070 Super and 32GB of RAM.
Thanks!
Changing the virtual memory paging file size from 16GB to 32GB seems to fix the issue!
Glad you figured it out, sorry for the late reply, it was night here. Did that happen with all versions of dev, or did you try the full regular version?
Can't even download the first link. When the download reaches midway it stops and says "not available on the website"???
Maybe it's something with the internet or GitHub, try again later maybe.