Download Prompt styles: www.patreon.com/posts/sebs-hilis-79649068
👋
How can I install it?
@@neprhes
Here you go, you don't even need to pay.
prompt : RAW candid cinema, 16mm, color graded Portra 400 film, remarkable color, ultra realistic, textured skin, remarkable detailed pupils, realistic dull skin noise, visible skin detail, skin fuzz, dry skin, shot with cinematic camera
Negative prompt: NSFW, Cleavage, Pubic Hair, Nudity, Naked, Au naturel, Watermark, Text, censored, deformed, bad anatomy, disfigured, poorly drawn face, mutated, extra limb, ugly, poorly drawn hands, missing limb, floating limbs, disconnected limbs, disconnected head, malformed hands, long neck, mutated hands and fingers, bad hands, missing fingers, cropped, worst quality, low quality, mutation, poorly drawn, huge calf, bad hands, fused hand, missing hand, disappearing arms, disappearing thigh, disappearing calf, disappearing legs, missing fingers, fused fingers, abnormal eye proportion, Abnormal hands, abnormal legs, abnormal feet, abnormal fingers.
They are not free
@@thegames6391 I noticed that as well. Maybe they were free when this was released? Paywall now :(
I've watched some other channels focused on stable Diffusion, and just wanted to say I appreciate how you don't fill your videos with "funny" gifs and clips, and don't waste time explaining Windows basics like how to use file explorer, unlike those other channels. Keep it up
Believe it or not, the ones you mentioned were AI works.😂
ayo nothing wrong with a couple of memes here and there!
lol I hate those channels that put dumb meme video clips in.. dad jokes are better
@@CoconutPete you dissing on channels like Fireship and bycloud? bro them channels r lit yo
I always look forward to your videos, you're like the Bob Ross of AI art.
Wow, thanks! Glad you like the videos 😊🌟
great comparison!!
ahhhhh the production quality has come so far my guy!
Pretty sweet eh? Did you see the little text intro too? I even made the beat for it 😅
@@sebastiankamph Yeah super clean!
Great content in the video. One of the few YouTubers I listen to at 1.75x speed.
Wonderful tutorial! Who would expect that denoising has nothing to do with removing noise? Now I understand a lot more. Thanks!
I think they need to rename some of the settings to make them more intuitive
You are my most favourite YTer to learn from about SD
That's very kind of you, thank you very much! 🤗🌟
I always think of the noise level as the level of squinting my eyes -.-
Hah, that's clever
When I watch this guy I have this feeling that he's a nice guy, but he could snap any second and do something nobody would ever expect! It gives me that feeling lol
Wow! I think the quality of your new cam is amazing.
A nice camera for a nice guy :))
Thanks for the valuable tips
Glad you think so! I wasn't sure if I should go for it, but it seems most people think it's an improvement 😊🌟
Beautiful demonstration of the img2img tab!
I've been using the sketch tab to draw new fingers to fix hands. Like Seb shows here the denoise value needs to be tweaked every time but once it clicks you can actually add all kinds of things and even fix hands :)
Thanks for all your splendid tutorials. Although I consider myself very advanced at working in SD, every time I watch your videos I get some new ideas. Also, your default negative works extremely well with my checkpoint, Colossus 2.03.
I'm happy to hear that! We probably all have some little neat tricks that we could learn from each other 🌟
The new setup is very cool. It is nice to see your video either way; it is really helpful.
Thank you kindly! Glad it helped you. What feature in img2img do you use the most? 😊🌟
This is absolutely one of the best tutorials I've seen for SD! Thank you Herr Kamph! Sry for being a "shart" before! :) Subbed!
Thank you kindly! Hope you'll enjoy 😊🌟
Now I know why I wasn't getting certain results I wanted. Thank you!
Production quality looks so good my dude
Oh why thank you! Very kind! 😊🌟
22:20 I've found that if you want to increase resolution, prompt engineering with "highly detailed", "perfect focus" and "closeup details" gets the models to output finer details even if you're generating non-closeup images.
prompt engineering Lol
@@scrung Yeah, adding new keywords or tweaking existing words to push AI to generate improved image quality is called "prompt engineering". I think it started as a joke but it's commonly used seriously nowadays.
Seb is an absolute rockstar. I love his way of explaining, but the most important thing for me are the dad jokes. I hope you can do something on a few advanced methods with higher resolutions too. My minimum output is around 4K, and I feel the detail attainable gets way better with increasing resolution. Also, waiting times: I'm used to waiting 3-5 min for an image with 2 or 3 ControlNet instances and Roop on my 4090. My fav workflow atm is previz with PS beta/Firefly, then feeding that into A1111 and starting to bounce. Firefly is so amazing for cleaning up little things and finishing an image, while it's terrible at finding a creative solution and closing in on an idea.
this was very helpful, thank you!
Beautiful demonstration of the img2img tab! Well done 👍👍👍
You got some pretty nice gear, man!! I've been on this channel since day one and I am so proud of you, my man! Keep making this amazing content for the community!
By the way, those bad jokes always get me hahahaha
Wow, thank you! You guys and gals are the real mvps out there! You're a real rockstar for hanging in since the early days. I'm surprised you stayed when the mic and video quality was so low back then 😅😘🌟
god bless you bro, you are going straight to heaven
Fantastic Video, Thank you for your efforts to make this wonderful tool more accessible to newcomers!
Sebastian you're amazing bro, thank you so much, I really appreciate all your hard work making these videos.❤
Thanks Sebastian for this amazing tutorial! I finally got some clear understanding of img2img. Thanks for all the content.
0:21 -- The "joke" you’re referring to appears to involve wordplay or a pun, but its meaning isn’t immediately clear. The statement _“a sock takes five tails”_ is particularly confusing, as there’s no widely recognized link between socks and tails. It’s possible that we’re missing some specific context or cultural reference that would help clarify the joke’s meaning. Anyway! Thank you for this informative and useful video tutorial.
Incredible tutorials, congratulations!
A tutorial would be great teaching how to transform a drawing into a realistic photo with the same environment as the drawing!
Thanks for sharing
Great suggestion!
New setup looks good!
Thank you! Appreciate it 🥰
Excellent tutorial. Thank you!
I opened this one on my TV and I was like WAAAT, is this a new setup by Panavision, or just a new Hollywood DOP? Yes, it's visible right away. I love it.
I'm very happy you feel that way. I spent a lot of time researching to be able to get something like that! Anamorphic lens on a Lumix S5iix at 6K open gate.
thank you very much sir....i really learnt a lot and enjoyed your tutorials....keep bringing more content 😇
Awesome video, Thank you Seb.
Glad you liked it! 🌟
@@sebastiankamph ❤
really good tutorial, thanks!
this channel is great!
Hey, good tutorial! I got one question: how do I make room images stay consistent if I want to make 2 images inside a space but from different angles?
Is it just my imagination, or is a prompt in text2image with a ControlNet image enabled more powerful than img2img with a text prompt?
Thank you for another great tutorial. I really enjoy these
You are so welcome! What would you like to see more of?
Very nice new settings and background. Why did the minimalist TH-camr's background get jealous? Because it felt like it was being "framed" out of the picture!
Thanks! Oh, on topic, very nice!
Very well explained. Thank you very much! 😀
Brilliant job
Amazing video, thank you!
Thanks Seb. Excellent tutorial. K 🙂
My pleasure!
Your tutorials are great! 👍
I appreciate that!
How do I add custom styles in SD, like you have in the drop-down list on your right? Please advise, thanks!
See pinned comment. Styles.csv
@@sebastiankamph Is it from this video or another? Coz I can't find it.
what model did you use on this?
Very good video, love the slow speed you talk at.
It was probably the Deliberate v2 when I did this.
Thank you so much, you're a great teacher. Super, super, super.
🌟🌟
I've always had major performance issues with inpaint sketch. Sketch and normal inpaint work fine, but as soon as I stick an image in inpaint sketch the ui just gets extremely laggy to the point of locking up. No idea why.
Same here. It always happens when I click to send the image from the UI. If I drag and drop the image from a file, it works fine.
Great video! I think it would have been better if it included a bit of ControlNet stuff to show how to give even more accurate control for generated shapes.
I’d love to see a video about reinstalling A1111 and changing to other versions and how to do git pulls and things like that. Every tut on these assumes a really high level of experience with these things. In particular, none of my inpainting / img2img etc work and I’ve seen that this may be a version issue.
I cannot get sketch to work to add glasses or a different lip color... The whole image is changing... I'm using the same model you're showing and the settings all look the same... Any idea what I'm doing wrong?
Can you advise me? I need a program to convert a photo into an animated-style photo.
Great instruction!
Thanks for the great video. I am using sd_xl_base_1.0, and you seem to be using Deliberate v2. Is the one you are using open source? I cannot find it on Hugging Face. Which one do you find more powerful? Thanks a lot.
It's available for free on civitai. Custom 1.5 models are probably more finetuned at this point. SDXL is in its infancy and will probably be better over time with custom models being trained on it.
helped a lot thanks
I only get black when using inpaint sketch... how do I get the color changer?
How can I get to the main page of Stable Diffusion?
Thanks for your help! Learned a lot from you.
Glad to hear it, Konrad! 😊🌟
can you zoom in at all?
Great tutorial. Thank you.
Hey, how do I convert a cartoon character into a realistic one? I mean so it doesn't just make slight changes; can it understand the character's features and generate a complicated real-life character?
Thanks. How did you get the "resize to / resize by" option?
I do not see any styles. I downloaded some, I think the one mentioned in your comments, then restarted and refreshed, but if I push the down arrow it does not open and I do not see any styles.
You are amazing,I subscribe
Is there a way to take a drawn image and convert it to a real life image with img2img and vice versa?
Thanks master, very useful and didactic.
What service do you use for SD? From one of your videos you said that you don't run it locally any more.
Is there a way to put the img2img generation through Hi-Res Fix like in text2img? Using the upscaler in img2img to get the same resolution I get in text2img does not look that great.
You can run an image from img2img into img2img again and just raise the resolution. Similar concept.
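If you script this loop-back outside the UI, it helps to plan the per-pass resolutions up front. Below is a minimal sketch of such a schedule; the helper name and pass count are made up for illustration, and the only real constraint assumed is that Stable Diffusion wants width and height in multiples of 8:

```python
def upscale_schedule(width, height, target_width, passes=3):
    """Plan a gradual img2img loop-back upscale: each pass grows the
    image a little instead of jumping straight to the target size.
    Dimensions are rounded to multiples of 8, which Stable Diffusion's
    latent space requires."""
    def round8(x):
        return max(8, int(round(x / 8)) * 8)

    schedule = []
    for i in range(1, passes + 1):
        scale = (target_width / width) ** (i / passes)  # geometric growth
        schedule.append((round8(width * scale), round8(height * scale)))
    return schedule

# e.g. going from 512x512 to 1024 wide over three img2img passes
print(upscale_schedule(512, 512, 1024))
```

Each tuple is the width/height to set before sending the result back through img2img at a moderate denoising strength.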
Problems with a model standing in water, help needed: I tried to modify a photo of a person standing in a dungeon-like room. I wanted to add some 30 cm of water covering the floor so that she is standing in this water, with the rest of the photo unmodified. I masked the lower area with the inpaint tool and used the prompt "dirty water" or "woman standing in dirty water". It almost works fine with a good result, but it creates some ugly artefacts, deforming the legs in an area above the masked zone. How can I get rid of this artefact? I was using epiCRealism as the model. Yours, Uli
How about the batch mode? I was going to use img2img upscaling on some images, then I realized I can't use the original prompts for each file since there is only one prompt input.
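One workaround outside the UI is sidecar prompt files: keep a same-named .txt next to each image and let a small script pair them up before feeding the batch to the API. This is a hypothetical helper, not a built-in A1111 feature; the folder layout and function name are assumptions:

```python
from pathlib import Path

def load_batch_prompts(folder, default_prompt=""):
    """Pair every PNG in `folder` with a same-named .txt prompt file,
    falling back to `default_prompt` when no sidecar file exists."""
    pairs = []
    for img in sorted(Path(folder).glob("*.png")):
        sidecar = img.with_suffix(".txt")
        prompt = sidecar.read_text().strip() if sidecar.exists() else default_prompt
        pairs.append((img.name, prompt))
    return pairs
```

Each (image, prompt) pair can then be submitted as its own img2img call, so every file keeps its original prompt.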
For some reason I can't get it to work with an abstract image I'm testing. I'm trying to use light beams that travel around the image, but when I try this workflow it just results in a smudged, semi-transparent line and never an actual laser beam image. Is there something I need to do here? I can't figure this out.
Great guide! What GPU are you using?
Rtx 3080
Naise new intro :)
Got to step up my game! You liked it?
@sebastiankamph O yea, it's clean and simple, and gives the channel a professional feel
The problem I found with img2img was that the resource requirements increased dramatically, and the model starts getting upset about my aging 1080 Ti's 11 GB of VRAM lol
What is the setting to get faster results??
How often do you toggle original/fill? If I want to introduce a new item to the foreground or background…how do I approach this? Inpaint sketch, fill, .7+ denoise?
very helpful thank you
Hi Sebastian, I'm having a problem: after generating, the result image becomes darker. Any way to solve this? I don't see this problem in your video.
I wonder what video card you have that can generate a full set of four 768x768 images in just 11 seconds.
RTX 3080
Dude you said the styles were free to download from your description but they're not?
What if your art isn't humans or scenery, but more like a bunch of lines in varying 3D space that create an image? Would the AI understand how to create a new one? If not, is there any software that does, or that can take in the art and create something new while still using that style?
This was helpful, thanks for that. But with roop and eyemask extensions/scripts, you can do much of what you were doing much more quickly and accurately. Still a good review on image to image.
Hey Sebastian, I have a quick question here. How do I ensure a crystal-clean transition when inpainting half of a face? For example, I am creating a half-human, half-cartoon face, and the transition is not very clean, I mean the line where the new prompt begins. Sometimes the lips are distorted and so on. Any advice, please?
Good video. Thanks
Really enjoy your tutorials! Commenting for the algo ;)
Thank you kindly. Real mvp for helping the algo! 😊🌟
Thank you very much! Definitely helped me a ton!! 🙌🙌
Glad it helped, my friend! Good to keep seeing you in the comments section. Makes me happy! 🌟😘
@@sebastiankamph 😊
Why, when I use img2img and set it to 0.5, does the image look almost completely different, with even the background colors changed? But yours maintains a similar image at 0.6?
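Part of the answer is that the slider moves two things at once: in A1111's img2img, noise is added for roughly the `strength` portion of the schedule, and only about `steps x strength` sampler steps are actually run, so 0.5 and 0.6 differ both in how much of the original survives and in how many steps repaint it. A simplified sketch of that relationship, assuming default settings (the option to always run the full step count turned off):

```python
def img2img_steps(sampling_steps, denoising_strength):
    """Approximate how many sampler steps A1111's img2img actually runs:
    only the last `denoising_strength` portion of the schedule is
    denoised, so the effective step count shrinks with lower strength."""
    return int(sampling_steps * denoising_strength)

# at 20 steps, strength 0.5 repaints with about 10 steps
```

The model checkpoint and sampler also matter, so identical slider values can still behave differently across setups.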
You can keep the original image: go to sketch and draw the glasses, and after that go back to inpaint, paint over the glasses with the prompt "yellow glass", and wait for the result ^^
How do I download the software into a mac?
thank you!
You're welcome!
What’s your hardware setup Sebastian?
Rtx 3080
May i get the link ?
What website or program do you use..
Stable diffusion. a1111 and fooocus mainly.
I wish there were a tool to pick a color from the image in sketch mode, as it's often difficult to pick the right colors; it often ends up with flashy items like your glasses or eye here.
On the img2img tutorial part: if you want to make changes while keeping high denoising, you can use () to emphasize. For example: ((man)) with blue hair. Best results with inpainting.
You can also use a scale directly, e.g. (man:1.21). It cuts down on the parentheses. And you can select the word or phrase you want to emphasize and press ctrl+up to increase the emphasis and ctrl+down to lower it.
@@phizc yup, this too :)
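The two syntaxes are equivalent under A1111's attention parsing: every pair of parentheses multiplies the weight by 1.1, so ((man)) gives 1.1 x 1.1 = 1.21, the same as writing (man:1.21) explicitly. A toy version of that rule, simplified to skip escapes and [] de-emphasis:

```python
def emphasis_weight(token):
    """Return (word, weight) for a parenthesized prompt token: each ()
    layer multiplies the weight by 1.1, and an explicit (word:1.5)
    value overrides the multiplier. Escapes and [] are ignored."""
    weight = 1.0
    while token.startswith("(") and token.endswith(")"):
        token = token[1:-1]
        weight *= 1.1
    if ":" in token:
        token, _, value = token.partition(":")
        weight = float(value)
    return token, round(weight, 4)
```

So stacking parentheses and the explicit `:number` form land on the same weight; the explicit form is just easier to read and tune.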
Opening shot looks very good, very professional looking. May it ever drive the metrics. 😀🙏
Thank you kindly! I hope so too 😊
@@sebastiankamph :) :)
what GPU do you have to generate images that fast ??? :o
RTX 3080
Just so I'm not misunderstanding: when you say your image prompts are free, is it just because it's an old video, and now they are not actually free anymore? It's asking me to subscribe. I don't mind subscribing, I just want to make sure. Maybe you have some free and some locked behind a paywall, or are they all locked behind a paywall now? I notice this on a lot of your older videos. Thanks
You are correct, they used to be free and they are not anymore. Sorry for the confusion.
Thanks a lot! Appreciate all your work and knowledge you are sharing with us.
But I am constantly struggling to keep those very useful workflows, effects of parameters, etc. in my brain when I only work with Automatic1111 in my spare time once or twice a week for an hour 😅 Would you think about writing guides too?
Is there maybe a knowledge-base tool as an extension for Automatic1111 which could help me keep tools and best practices closer together?
New to the channel. Absolutely love how detailed your guides are. Been binge watching them recently on repeat, lol.
I know it might be asking a lot, but have you ever considered maybe including the base images that you play around with for people to try and follow along? I know ultimately it’s going to come out different. But as I listen to some of these videos while doing housework, I find myself thinking “okay when I get to my computer, I’m going to try and find a soulmate image and replicate your steps exactly to teach myself, so that I can then apply it elsewhere in my open projects”
Just a thought! You already provide an abundance of free resources and that’s more than enough.
Anywho, liked and subscribed (didn't realize I hadn't done that yet)~ looking forward to additional content on your channel once I get these fundamentals down :D
It's a common thing that, when following a tutorial, you do something and then look at the guide you're following to confirm that what you have made looks similar to what you were instructed to create.
I understand that the concepts are universal. The point of the message was for when learning/following his guides step by step for the first time. Not for when applying it to my own unguided work.
I've been following all of your Stable Diffusion tutorials and found them really helpful, but when switching from img2img to sketch to inpaint to inpaint sketch, I often get an error, the mouse makes color inputs on its own, or the image doesn't load at all. Do you have any idea what may be causing this?
I'm having trouble finding the stable diffusion everyone is using in their videos, can someone give me a link?
th-cam.com/video/kqXpAKVQDNU/w-d-xo.html