Download Prompt styles: www.patreon.com/posts/sebs-hilis-79649068
👋
How do I download these styles? I don't see any downloadable link on the page. Thanks!
@redshift3696 Ctrl+F "styles.csv" on that page.
I don't have the buttons underneath Generate, only a delete button and the blue one. I reinstalled many times and they're still not there. Did they move the buttons?
It's only for people with a subscription.
I installed it locally and have been messing with it on my own for about 4 days. But now it's time for me to watch these tutorials and truly learn!
I just reinstalled Automatic1111 the other day after not using it for a long time. I've been trying to find the video that helped me the most before all the new updates, and this is the video I was looking for to help set up my settings and extensions like last time. I appreciate how you go over everything, thank you for your help 😊
Glad you found it and enjoyed it! 😊
Yo... You the man!!! I'm a noob (at 67 years of age)... just jumping in to find out that the installation process should be called "Unstable Confusion"... but now that I've finally installed it I look forward to trying it out. I am a graphic artist and a computer nut. I started using DOS back when DOS was cool. But seriously, I just wanted to thank you for working so hard on this info and sharing it with us all... Great presentation and concise information... Thanks again!
Thank you kindly! Appreciate the comment. I'm a tad younger but I also remember DOS from back in the day. I was probably 10 or 11 when I wrote little .bat files that just redirected to another with a text output 😅
How come this video doesn't have more than 1k likes four days after it was published? C'mon guys, this video is GOLD for newcomers.
Thank you, that's very kind of you 😊
Give it time
I've never seen such a detailed explanation of each knob and dial in Automatic1111. Beautifully rendered! ❤
Because new users like me are viewing and reviewing the first one from time to time and haven't made it this far yet, haha
You're a great teacher; this is the perfect series for SD beginners
Thank you kindly :)
I'm subbing to you because you explain things properly and well, and you touch on a lot of small details that seem insignificant but combined paint a much clearer picture of what's going on. I appreciate that.
Thank you very much, both for the sub and the kind words 😊
Hear, hear, that's why I subbed
Updated! This is fantastic. A friend of mine just started to learn more about Stable Diffusion, I'll share this video with them
Amazing, I hope it will be helpful! 😊🌟
Thanks so much G, so many people try to explain this stuff but you really understand how to take the explanation apart and explain it the way it should be explained to total newcomers! 💯💯🚀🚀
I took a break from Stable Diffusion for a few months, and after picking it up a few days ago and fumbling with it, I decided I should review the basics. I went looking for the most recent tutorial I could find (since in the AI world anything older than a month tends to feel out of date) and found this just a few hours old. Funny how that works out sometimes.
I got you, don't worry! 😁
This is such a great tutorial. Very well structured, and well explained. Thank you!
Glad it was helpful!
The scar and veins by your right eye make you look like a movie villain, which is pretty cool.
I was able to get everything up and running with your last video on how to install SD. Thanks, man. I like the way you speak.
Thank you, appreciate it! And glad you got it all sorted 😊
This is the best video so far to go from absolute noob to intermediate.
Just need to play around the settings he mentions and that's it.
Hugs from distance.
Subscribed, commenting, and liking
Thanks for your new video. I always learn something new watching your videos. This time I learned about the ESRGAN NMKD Siax model.
You're very welcome and I'm happy you can expand your knowledge about generative AI art :)
The best video tutorial on Stable Diffusion I've seen so far. I will check out the other ones. One thing I am looking for is how to convert architecture sketches into real images and how to organize settings in Stable Diffusion (RunDiffusion), which I just started to use. Thumbs up for all the things already provided and for the knowledge shared.
Thank you! Glad you enjoyed it. You'll want to use ControlNet for your use case.
Thank you so much Sebastian, great tutorial!
This video explains it so well... I was looking for this kind of explanation. Thank you.
Happy to help! Recommend it to a friend 😊🌟
Hello Sebastian, thank you very much for your tutorials, they are so clear and calm :) thanks for everything!!!
That's so kind of you, thank you :)
Great tutorial, easy to follow and love the examples you included
Glad you enjoyed it!
Picture me, chilling in my chair with my gown on, a can of cider, and my pet rat Max nibbling on some food next to me. Just switched from the calmest and most chill tutorial to the second vid. Clip volume goes up by 300%, scaring the shit out of me. Not that I'm complaining, that was super funny to me.
Thank you so much for making this video. It was exactly what I needed.
Happy to help! I understand Stable diffusion can be confusing at first, and it keeps updating all the time, changing how things look. Good luck!
Thank you so, so much. I really do appreciate you sharing your knowledge in such a calm and precise manner.
Glad it was helpful! 🌟
Excellent pacing and full of value. Thank you
Glad you liked it! 😊
Just want to say your content is easy to follow and very helpful in learning this! Appreciate you 🙌
Awesome! Thank you, Jeff! 😊🌟
Youre the best teacher bro, thanks a lot homie!
Happy to help!
Thank you for the best tutorials!
Glad you like them! Tell a friend 😊🌟
I love your tutorials, they are so helpful and professionally made. I feel like I'm learning as much from you as I would from a paid course! I had a question though, at 3:32, I don't have the 'apply selected styles to the current prompt' button in my Stable Diffusion. Also at 17:13, I don't have a depth model option.
Same here. I am thinking it must be extensions we didn't install yet.
Wow, thank you Sebastian, that is a very nice video for newbies. I already started everything and am following the steps!
Glad it was helpful!
I usually come for the jokes and you never fail to deliver them 😂
Glad to hear you're enjoying them! If you guys keep coming back for them, I'll keep providing 😅
@sebastiankamph I'm sold
This guy is the Bob Ross of AI art
Hi, newbie here. Your channel and tutorials so far have been a huge help and easy to follow. I subbed to your Patreon. Keep up the good work, Thank you sooooo much! I am so excited to keep learning AI.
great tutorial, thank you so much
Thank you, this was a very informative and well made tutorial. Appreciate it! ♥
Happy to help!
thank you so much! please keep on covering it. your explanations are 🚀🚀
Thank you! Will do!
From what I learned, the higher the resolution, the more sampling steps you need
Loved the first video - thank you - BUT the most basic instruction the next day is missing - HOW do I open Stable Diffusion?
Double-click the webui-user.bat, or make a shortcut to it on your desktop
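(For anyone unsure what that file contains: a stock webui-user.bat from the A1111 repo looks roughly like the sketch below; exact contents can vary by version, and the flags shown are just examples. The COMMANDLINE_ARGS line is where launch options go.)

@echo off
rem launcher for stable-diffusion-webui; this file sits in the install folder
set PYTHON=
set GIT=
set VENV_DIR=
rem optional launch flags go here, e.g. --xformers or --medvram
set COMMANDLINE_ARGS=
call webui.bat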
I thought DALL-E was cool until I saw the super insane Stable Diffusion images...
wow, great info here. Thank you!
Glad it was helpful! Welcome aboard 😊🌟
Great video, thank you very much.
This guide is so good 😢 thank you
Boom, fantastic as always!
Subscribed. Thank you for your videos!
Thanks for subbing! Welcome aboard the AI hypetrain 😊🌟
This is very clear and concise. I could follow along easily. Thank you!
You're very welcome!
Thank you for this, it was extremely helpful and well explained. I was at my computer with a paper and pen taking notes as if I was back in college, haha!
Wonderful! Glad you enjoyed it 😊
awesome tutorial!
Thank you!
Thank you so much for this tutorial!
You are so welcome! 😊
Learned a lot thx
Sooo useful tutorial series :)
Glad you think so!
The art of humorous delivery of AI-generated jokes is just..
Thank you very much for the tutorial!
I wonder what I should fix if some downloaded models simply give a grey box result instead of an actual image?
Your tutorials are so good!
Thank you! Tell a friend 🌟
Already did ^^ @sebastiankamph
First, thanks for the videos. Your time spent is appreciated.
My questions are about hardware requirements.
My ONLY understanding is 4-6 GB VRAM minimum, and a faster GPU is obviously better,
but what generates 4 images at 512x512 in 1 min?
What generates 4 images at 4K in 1 min?
How, if at all, do CPU and RAM come into the equation?
I've not seen any SD Local Install vids addressing hardware, maybe I'm looking in the wrong places.
Suggestions?
edit: "standard" image size is 512x512, not 540x540🤦♀
Thank you! It's all about the VRAM and speed of your GPU. You can run it on almost any old potato as long as you have a good GPU. My RTX 3080 10GB generates a 512x512 in a couple of seconds. I tried 8K with tiles, that took me about 30 minutes.
@sebastiankamph Thanks for the detailed reply!🙂
My current build is incredibly old and I'm going to rebuild from the ground up, saving the SSDs, case, and PSU.
Thinking a 3070 with mid-range motherboard, RAM, and CPU should do for now.
Thanks again.
A GTX 1070 can generate 1 picture in around 20 secs (512x512; 25 sampling steps, Euler A, CFG scale 7).
Without "styles" (negative & positive) applied it's around 1-2 secs faster.
Anyone with an AMD card here, please tell us how fast those GPUs are!
Thank you very much! Very interesting and useful to watch your videos
You're very welcome! 😊🌟
Thanks for the video. When will it be possible to run it on Macs?
You are so helpful! I have a question about my Stable Diffusion checkpoints: some models I downloaded and installed I can't use. I keep getting errors. Also, I can't change them without shutting it down and reloading it, and even when the change does go through, it gives me an error.
Here is one of the errors:
NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Time taken: 0.5 sec.
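(Note for anyone hitting the same NansException: the --no-half flag the message suggests goes on the COMMANDLINE_ARGS line in webui-user.bat. A minimal sketch, assuming a standard Windows install; --disable-nan-check only hides the check rather than fixing the underlying cause:)

rem webui-user.bat: add the flag the error message suggests, then relaunch
set COMMANDLINE_ARGS=--no-half
call webui.bat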
Great! I've been using Prompt Chan AI, which is great but not as good as Stable Diffusion looks to be. BTW, the Stable Diffusion free web version is junk compared to others like Midjourney or Soulgen. The best web AI I found is Prompt Chan AI, but I wanna start using Stable Diffusion.
Can you please share the link for the sampler/steps comparison image? (the one with the dogs) Thanks for the video.
Great video, and cute doggos!
Thank you my friend! 😊
SD AI noob here... I suppose we can choose the new SDXL model and that will work fine with Automatic1111, installed the way you've shown here?
Thank you Sebastian, I have watched a couple of these videos on how to get started in A1111 and this has been the best. I have only recently begun my journey into generative AI. I began using Fooocus, which was nice and simple to start with. I have dabbled a little in Comfy and am now looking to play with this a little. Which of Comfy and A1111 would you recommend for a beginner? Looking forward to binge watching your other content 🙂
Glad to hear you're enjoying the content! I recommend starting with A1111 and then also learning Comfy a little on the side. They complement each other very well.
good tutorial and nice voice too
Thank you, very kind!
@sebastiankamph You're welcome and you deserve it!
Great tutorial as always, thanks so much bro 🙏🏾🙏🏾
I'm thinking of buying a new PC for the AI stuff (image, audio and video). Can you please suggest a good PC that works smoothly for training and other AI-related things?
If it's okay, it would be great if you made a video.
Hey and thank you! Regarding Stable Diffusion generation, it really just comes down to the GPU. The rest are nice-to-haves for a computer of course, but SD mainly works with your GPU. I'd get anything with 12+ GB VRAM if possible, depending on your budget. The more the better.
@sebastiankamph Thanks for the suggestion 🙏🏾🙏🏾 you're the best
You are an amazing tutor! But I just noticed you get amazing artwork with a single prompt, while when I use the same prompt the artwork is too primitive. Can you please tell me why? And again, thank you for your tutorials.
It's the styles I click in. Check the pinned comment for styles.csv, and also use a custom model (for example Deliberate).
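(For anyone wondering what that styles.csv actually is: it's a plain three-column CSV that A1111 reads from the webui folder, roughly like the sketch below. These example rows are made up for illustration; {prompt} marks where your own prompt text gets inserted when a style is applied.)

name,prompt,negative_prompt
"Cinematic example","{prompt}, cinematic lighting, film grain, highly detailed","blurry, lowres, watermark"
"Oil painting example","{prompt}, digital oil painting, rich colors","photo, photorealistic"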
You are an excellent teacher!
Thank you Sebastian! But my generation always fail with this note: NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Time taken: 24.1 sec. Would you please explain why this happens and how I should solve it? Thank you!
I must have screwed something up in my settings. I keep changing the Model/Checkpoint, but I keep ending up with friggin' anime characters! 😅
Hello Sebastian, after the initial guide I'm loving it, very descriptive and no filler moments. One thing: you say the styles are free, but I have to pay on Patreon to download them, or am I missing something? With other styles I got from the web I don't see the "Apply selected styles to current prompt" icon, is that because of the file?
Thanks and great videos, gonna watch them all.
Hi Alejandro and thank you for the kind words! I recently started experimenting with making the styles Patron-only as a means to be able to focus on making these videos. Initially that wasn't the case, but without it I might not be able to keep up. I will see how it develops and might change back in time. The button you ask about has been removed in the latest A1111 release. I did hear they were considering bringing it back though. Welcome aboard! 😊🌟
Thank you! @sebastiankamph
Any tips on fixing eyes? And the overall face xD The latest version, for example, doesn't have "Restore faces" and looks slightly different from the version in the video.
this is where it all began
So many videos. I can't believe it 😅
Thanks so much, this was super useful!
You're very welcome!
Thank you for these great videos!
Thank you for commenting!
The link for the NMKD Siax 4x upscaler is not working anymore, where can I download it?
Thank you for all the help, but for some reason I cannot use the Styles options. Whenever I click the arrow or anywhere around it nothing pops up and the generated images don't look good. Do you have any clue as to why this is happening?
In the video you said the styles are free to download; is this not the case any longer?
Really useful tutorial. I'm a beginner and understood it fine. Thank you for making amazing content for us.
Happy to help! A week on, have you been doing great art since then?
@sebastiankamph I had some problems launching SD because I have an RX 6600 XT, and you know AMD GPUs are so bad for making art with SD. Yes, I fixed some problems, but in the future I'll need an Nvidia GPU with more VRAM. About my art: I don't know too much about making good designs, but I'm learning something new every day ☺️. Right now I'm doing a simple art style, not photorealistic. Trying myself at POD.
Thank you so much for explaining
Glad it was helpful!
@sebastiankamph Can you make a video on SadTalker eye blink and head movement?
Do you have a tutorial video on how to download and use Stable Diffusion on a Mac?
This one is ultra nice and helped my beginning steps a lot!
Somehow I've found that I need to use anime stuff to get the characters into the postures I want, and then make a depth map to get the "realistic" one to behave like that... dunno, but I have problems with realistic humans doing, for example, someone jumping up with a martial-arts kick forward, akimbo Berettas in their hands - but with anime that works well!! - and then, like I said, with depth it's easy to reproduce it as a real image...
Why is it like that? Aren't there realistic checkpoints that you can prompt better in ways like that?
I feel like it's kind of random with the realistic ones... also with negative prompts.
thank you for the video Brother
You bet!
Thank you. I just subbed and will follow your channel.
Welcome aboard!
Great!!! It's awesome, thank you very much
Do you have a tutorial about how to use the inpaint upload in Stable Diffusion?
What's the point of upscaling if the higher resolution doesn't come with extra detail?
Thank you.
thank youuuuu!!!
Happy to help!
So, a question. I have created images with a certain model on the Tensorart site. Now how do I copy my work and continue on my PC? How do I get that same model (face, body, hair, etc.)?
Are Stable Diffusion and Automatic1111, with the use of models, free to use in a company, or how does it work with copyright and such?
Hello, first... great videos. But I want to ask: for some reason when I ask for a puppy dog, it creates 7-10 of them in different non-realistic poses.
Did you set a weird resolution not close to the trained one?
@sebastiankamph Yes, much bigger. Maybe I still don't understand how it works :) 512x512 is pretty small, and for my VN I am using 1980x1080.
Thank you very much, it works perfectly
Thanks!
Thank you for the support, greatly appreciated! 🌟❤️
Hey, do you know if RunDiffusion is uncensored?
Hey Sebastian, could you tell me roughly how big your Stable Diffusion install is on your PC?
Hello, I'm not able to download the free styles.. please send any other link
Is there any setting or control to increase the size of the ControlNet inpainting window for more precise masking? The only thing I've found is zooming the display size of text and images in my browser.
How do you add "SD VAE", "Add LoRA to prompt", and "Add hypernetworks to prompt" to the top of the menu?
Hello, cool video. What are your PC specs? Thanks
Sharing prompts and modules is awesome; getting the average user up to speed on hardware requirements is necessary.
I use an RTX 3080. The rest of the specs are irrelevant.
I really wish I was able to use this program. Unfortunately, I keep getting an error after trying to open webui-user.
Can this be used to apply a real portrait face to a Midjourney face?
Hi, love your tutorial. Can you please share the link for the preprocessor model? It's showing none on my end, while in the extension folder it shows a few. Please share a link? I searched in the videos but can't find one for A1111.