I immediately scrolled to the comments after the coffee cup fiasco and I wasn't disappointed. This community is so great; you can learn so much, so fast.
A much better solution when using "Masked Only" is to place a tiny dot of masking on or near content of the image that gives your masked region and prompt some context. What happens with Masked Only is that the image is cropped down to just the masked part, so it often loses the context to fit the new generation into. So if you want to inpaint a hand, add a dot of mask further up the arm so it knows how the hand should be positioned and sized to match the rest of the arm. In your example, adding a tiny dot of mask to the other coffee cup would have produced a better result, simply because the crop would then include that contextual information. You have to leave in enough for the AI to work with.
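As I understand it, the "Only masked" crop is essentially the bounding box of every masked pixel (plus padding), which is why a tiny dot stretches it. A minimal Pillow sketch of that idea (coordinates are made up for illustration):

```python
from PIL import Image, ImageDraw

# Blank mask the size of the source image (white = inpaint)
mask = Image.new("L", (1024, 768), 0)
draw = ImageDraw.Draw(mask)

# Main inpaint area: the hand we actually want regenerated
draw.ellipse((600, 500, 700, 600), fill=255)
print(mask.getbbox())  # tight box around just the hand -- no arm context

# A tiny context dot further up the arm stretches the crop region
draw.ellipse((450, 300, 454, 304), fill=255)
print(mask.getbbox())  # box now spans from the dot down to the hand
```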
I do the same: just mask only, and add 2 points around the part that I want to change so the AI has something to work with.
Dot Contexting would make a good topic for an instruction video.
How do I remove the clothes? It's not working for me.
@@loveutube04😂
@@loveutube04 get cloth adjuster lora, works like a charm :)
Hey Seb, just a little tip. Once you're done inpainting, you can put the final image into img2img with very low denoise to remove inpainting blurs, shadows, etc. for a smoother result.
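For anyone who prefers scripting over the UI, a rough equivalent of that low-denoise pass with the diffusers library (model id, prompt and strength are placeholders to adjust):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("inpainted_result.png").convert("RGB")

# Very low strength: just enough diffusion to even out inpainting
# seams and blurs without repainting the composition.
smoothed = pipe(prompt="photo of a woman at a cafe table",
                image=image, strength=0.15).images[0]
smoothed.save("smoothed.png")
```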
I thought of the same, as I was having trouble with an image that is a painting. Even with very little denoise strength, img2img still manages to change small but crucial details like the eyes. I wish there were a better solution, since the "only masked" option leaves a much smoother surface than the rest of the image and makes it look out of place. It's not so much trouble when it's a photo, but in other kinds of images, like paintings, the "blur" of the fix stands out more. :S
How do you use this interface, the one he's showing in the video?
@@spanko685 It's Automatic1111, which is relatively easy to install. Just make sure you follow the steps exactly, especially when it comes to the Python version stated. Too recent is just as bad as too old, because some of the components are very finicky. There are tons of videos and websites covering it specifically, as it's the most popular and probably still the most feature-rich UI.
@@tannhausergate7162 thx so much
how? can you elaborate plz
The reason that the coffee cup doesn't fit well within the image is because the render box for the inpaint area is such a small part of the image - anything generated is done so only within the context of a) what is not denoised i.e. the original image and b) what SD can actually see (within the render box). For high denoising strength you generally want a larger render box, otherwise it's easy to lose context.
But what if you only want to change a small area? No problem! The render area is created as a bounding box that contains all the inpaint area you've selected - so you can increase it by adding tiny dots of inpaint area to the scene. If you make them very small, then whatever is behind them will generally be unchanged once rendered, so only the main area you've selected will be altered - but the dots will still count towards the bounding box. In the example given, I would put one dot above and to the left of the first coffee cup, and one at the bottom and to the right of the table. That way, what is rendered will probably adhere more closely to both the focus (the blur on the coffee cup) and the orientation and size of the table.
For lower denoising strength (0.5 or below I'd say) it will generally be able to glean the context from what remains of the original image, but for anything higher I get much better results with this method.
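A sketch of how I picture the render box being built (pure Pillow, my reading of it, not the actual A1111 code): take the bounding box of all masked pixels, pad it, crop, and upscale that crop to the generation resolution before denoising.

```python
from PIL import Image

def render_box_crop(image: Image.Image, mask: Image.Image,
                    padding: int = 32, gen_size: int = 512):
    """Crop the image to the mask's bounding box plus padding."""
    left, top, right, bottom = mask.getbbox()  # box around all mask dots
    left = max(left - padding, 0)
    top = max(top - padding, 0)
    right = min(right + padding, image.width)
    bottom = min(bottom + padding, image.height)
    crop = image.crop((left, top, right, bottom))
    # SD denoises this upscaled crop; the result is scaled back down
    # and pasted into the original at (left, top).
    return crop.resize((gen_size, gen_size)), (left, top, right, bottom)
```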
Clever! Nice tip 🌟
Is this different from simply increasing the mask padding to include that larger area?
@@EmperorZ19 That was my thought exactly. Isn't that exactly what the "Only masked padding, pixels" slider is for?
@@morganandreason Mask padding is exactly what this is for, but it only goes so big. InvokeAI has a much better solution with an actual bounding box you can move around.
@@blisterfingers8169 I can see how a movable bounding box is a lot better, yes. Hope it comes to Auto1111.
The reason you were struggling with the coffee cup is because "full image" is not just to keep the inpaint part the same resolution, but it tells the inpaint engine to look at the entire picture when drawing. So, you will get a perfectly sized cup, and correct sunlight on the cup coming from the window, for example. But with "masked only", it only looks at the area of the cup, and that's why it won't fit the scene as well.
I love you! In a week you have turned me from someone who had zero experience using AI tools to being a pro using stable diffusion. Thank you for all the amazing tutorials!
Happy to help!
I'm all for self-deprecating humo(u)r, but in all seriousness, you are a very capable teacher. Thank you.
Thank you! 😃
Hi, only got into Stable Diffusion a couple of weeks ago and hadn't had much luck with Inpainting, this tutorial made a lot of sense and got much better results with my first Inpaint after watching this. Thanks :)
1:09 - The scroll works by holding down Shift and scrolling the mouse wheel
Thank you for the knowledge. BTW, Did you hear about the artist who took things too far? Guess he didn't know where to draw the line.
😂🤣
Alright, I've subbed? Joined? I don't know what the term is.
I'm giving myself 2 full-time weeks to really pick up on Stable Diffusion, and you're helping to launch me. Thank you.
Thank you kindly for your support 😘 After binging my content you should be well and ready to compete with almost anyone in SD! 🌟
@@sebastiankamph The smallish content sizes really help with being able to scrub back and forth in a video- thank you :)
Awesome tutorial. Understanding the difference between the masked content options "original" and "latent noise" helped me so much. As others have already mentioned, making the inpaint area bigger also helps. When I inpaint body parts like the upper torso, I often mask the start of the arms and the neck as well, so that inpaint understands which direction the body is moving in.
I never imagined inpainting would be so simple, which is why I'm here. I was like, nice, I can draw black lines on stuff, how does that help me? I genuinely thought you had to be fluent in digital painting for this. Thanks for the tutorial dude.
Happy to help, glad you liked it! Tell a friend 🌟😊
@@sebastiankamph I do every time I learn something new in SD :)
It's like ASMR for SD. Thank you!
Happy you liked it, welcome aboard!
This was helpful Seb, thanks. Inpainting has always been a bit of a mystery.
I hope it will be a mystery no more!
Thank you for these tutorials and sharing your process. They've been a huge help for me as a beginner to these tools.
You're very welcome! 😊😊
@@sebastiankamph I would love to see an OUTpainting tutorial in Automatic1111 too 🐣
I am new to the whole inpainting topic and this video helped me a lot to get an overview of the possibilities. Many thanks ❤
Glad it was helpful! Thank you for the kind words 🌟
Cool, thanks for the simple, perfect instructions 💪💪
Seb I could listen to your voice for hours
Fine, I'll leave a comment after watching like a half dozen of your videos. Well paced, thorough explanations, technical expertise on the matter. Ok, then, I guess I have to thank you.
Appreciate it! Community engagement helps more people see my videos, which in turn helps me 😊
Thanks!
Happy to help! Thank you so much for your support :)
This is a great tutorial. While I worked through the pain of learning this myself, I know it would have helped me hugely. Bonus: I did learn a bit more about the blur settings. Thank you very much.
You're very welcome! Happy you learned something new 😊
Bro doing the lord's work. Thanks for your awesome tutorials
Welcome aboard sailor!
I've always found inpainting to be a bit hit and miss. That was useful. Thanks.
Very useful video as usual. I've been struggling to get this to work. Getting the hand now. Including help from the comments in this section. Thanks folks.
Happy you're getting it to work now! 🌟
Damn thanks for the zooooom! Was wondering when we gonna get it
Very nice video. Inpainting in SD is not intuitive but your simple, to the point instructions helped to explain some issues I've been having with it. Thank you!
I'm glad it helped you! :)
Thanks the guide and thanks to the commenters for extra tips.
Awesome stuff! I was wondering what all the different settings for inpainting were for. I've just started messing around with Stable Diffusion and watching your videos has definitely helped me to start figuring out what's possible. Thanks again!
Marvelous tutorials like usual, keep going the good work!
Glad you like them! 🌟
9:25 switch to "whole picture" instead of only masked for that particular task
Such a poet!
This was a really good and condensed tutorial, thanks.
I wish there was a tool for drawing, like, a heat map on an image, that allowed you to highlight areas where major changes are required and areas where minor changes would be better, in the context of the whole image. Inpainting with a mask is a great tool, but it will always fail to account for image context. There's basically no way to get SD to make a second cup like the first one, for example (short of exporting to Photoshop or something). There's a tool in DALL-E where you can choose different mask colors and assign each color to a specific idea, and I feel like that would be really useful if you could also assign weights to those colors.
I thought maybe the value of the mask would affect how much is changed, so I did some experimenting. I created masks with values going from black to gray to white. In the end, it didn't matter. At some point stable diffusion switches from off to on, so there's no middle ground as far as I could tell.
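That matches the mask being treated as effectively binary. A tiny Pillow sketch of that on/off behavior (the exact cutoff is my guess, not measured):

```python
from PIL import Image

mask = Image.open("gray_mask.png").convert("L")

# Pixels above the cutoff count as fully masked, everything else is
# left alone -- no partial-strength middle ground for gray values.
binary = mask.point(lambda v: 255 if v > 127 else 0)
binary.save("effective_mask.png")
```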
So basically, a heatmap-driven denoise parameter
Doesn't "ControlNet" or "segment anything" used with "inpaint anything" extensions do this?
Bookmarked, liked and subscribed. Mastering these parts is so important, and there's so much to try and fail at!
Welcome aboard! You'll find lots of valuable resources here, I hope 😉
This is soooo good. Loving the tutorials and the dad jokes man !!
Thank you! 😘😊
I did this before but always fucked it up, mainly because I still described the entire picture and didn't fit the resolution to the area. Your video helped a lot, thank you.
Great tutorial: when I use inpaint on my own face trained with DreamBooth and the Protogen Infinity model, I usually inpaint after the first result, with a 0.45 denoise.
I thought Sebastian was a bad teacher at first, but then he told a dad joke...
That's it, that's all I've got.
Thanks again for a good tutorial 😁👍 I have also been using the regular model for inpainting; it feels like more often than not the corresponding inpaint model is too truncated or seems to do the exact same thing as the regular one (or I'm just using it wrong).
Thanks. BTW - I've used inpaint sketch very little. The few times I tried it, it was VERY laggy. Can't be the GPU, and I've got the latest drivers too. For you it seemed to be working OK.
Nice video. Finally worked, thanks!
A tutorial for inpainting in ComfyUI would be good 😉 Your SDXL Workflow file is the best I've tried so far
love learning these skills, thx for the vid.
I think that using something like photoshop in-between the inpainting steps is key to getting great results. 😮
Yes, it surely helps a lot. Photobashing can yield great images.
*My favorite Swedish Stable Diffusion Oracle,* I have two humble questions:
*[1]* Is there any good deepfake face-swap plug-in (extension) for video in Stable Diffusion?
*[2]* This video concerns inpainting for images. Are there inpainting extensions for video as well in SD?
Now I'm still a newbie so my questions might have obvious answers, nonetheless I trust the Oracle to guide me.
Man, I wish I could help you. I'd love to be the Swedish oracle but I don't know of anything that can help you. Let me know if you find it.
I have done inpainting many times but never understood the options. Thank you.
Thank you so much for this tutorial and for going over many important things here. I have a question, hope you don't mind.
At 10:00, this is exactly how I would like to work: the sketch tools are extremely simple, so how would the process you mention there go? I am a pretty good artist, and I would love to draw an almost-good cup there and then have the AI merge/blend it in better with the rest of the image. Or, if I'm lazy, I would love to be able to concept-bash some stuff together and have the AI fix it up.
Amazing explanation and video, also a masterclass in efficient teaching! Thanks.
I am only facing an issue with inpaint sketch where the whole UI gets laggy and slow, only with the sketch option. Do any potential causes cross your mind?
This was cool! Thank you.
Do you have a vid that is focused on img2img (not inpainting)?
Or a vid on outpainting would be very cool, too!
One last vid I would like to see is how to craft txt2img prompts that avoid common defects, such as cropped heads, double body parts, heterochromia, crossed eyes, and other Stable Diffusion oddities.
Thank you for the information.
Fantastic tutorial man, informative and simple to follow
Cool stuff, you're a great teacher!🎖
thank you for making these videos! You are a great teacher!
You're very welcome! 😊
Great tutorial! Thank you
Amazed that the AI nailed the Starbucks logo.
I use InvokeAI when I need inpainting or outpainting, if at all possible (LoRA support coming in a few days means a lot of the cases it can't handle right now soon will be possible). Much better interface. Hoping I can switch over entirely once the migration to nodes makes add-ons possible.
me2
Is there a way to manage skin tone fixes? For example, in yours, the face is slightly off tone from the rest/untouched parts. How do you go about fixing tone/brightness/contrast/etc. when you like what the seed produced, but the finer details keep it from blending together as seamlessly as you're trying for?
That's because you should use inpainting models; they not only ensure the edges/lines match, but they also get the light/colors right.
What good inpainting models are there? I only see like 5 on civitai
@@ddrguy3008 You can create your own inpaint models too, using the Checkpoint Merger in Auto's GUI.
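For reference, dedicated inpainting checkpoints can also be driven from code; a minimal diffusers sketch (the model id is the stock 1.5 inpainting checkpoint, paths and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("portrait.png").convert("RGB")
mask = Image.open("face_mask.png").convert("L")  # white = repaint

# Inpainting checkpoints are conditioned on the unmasked pixels,
# which is why they match lighting and skin tone at the seams better.
result = pipe(prompt="portrait photo, natural lighting",
              image=image, mask_image=mask).images[0]
result.save("fixed.png")
```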
That was insightful, thanks. The pace of the video was good; I never felt I was lagging behind. Yet I can't help but notice: how come your SD is so fast? Mine generates 10 times slower. Are there any guides on settings for Automatic1111? I have 8 GB of VRAM, Nvidia 1080.
Finally! Canvas Zoom!!
Best feature! 🌟
how 😭😭😭😭
Informative, but I do wish you'd said what mask blur did as you set it, as well as "Only masked padding, pixels". I could see the result but not understand the why. As for the rest, it was very nice.
Mask blur changes the blur of the mask edge, increasing or decreasing it. Padding just gives a larger area to work with to adapt for the resolution you want in the render.
The padding is useful to give the AI information about the outside when it should change the inside. Balancing denoise and padding is needed when you find the AI inpainting a whole body inside the face mask, especially at higher resolutions. I set mine at about 188 when generating at 1072 x 1400.
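Concretely, "Mask blur" is just a Gaussian feather on the mask edge, as I understand it; a short Pillow sketch (the radius tracks the slider value):

```python
from PIL import Image, ImageFilter

mask = Image.open("mask.png").convert("L")

# Feather the edge so inpainted pixels fade into the original image
# instead of stopping at a hard seam; radius ~ the "Mask blur" slider.
feathered = mask.filter(ImageFilter.GaussianBlur(radius=8))
feathered.save("feathered_mask.png")
```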
I'm not sure I understand the purpose of latent noise. If you're setting denoising strength to 1.0, doesn't that mean that the resulting image has nothing in common with what was inside of the mask, and it wouldn't matter which mask fill option you chose?
Do you know how to use ControlNet with inpainting? I can't get good results when also using ControlNet at the same time.
This is exactly what I need. Thank you.
Happy to help!
did you get a new camera? the video looks very crispy, great video as always
Same camera. Video is in 1440 now tho (however some scenes upscaled from 1080). Good to hear it looks better though :D
I made an image of a man in a blue suit. It's like a portrait. I then went into img2img > inpaint and made his entire suit black with the marker tool. I have most of the settings the same as Seb, including original for masked content and inpaint area is only masked. Sampling steps at 25, Euler a, batch count 5, cfg scale 10, denoising 0.8. positive prompt is red suit now. I click generate and I get 5 images on the right that are exactly the same as the original image on the left. I mean absolutely identical.
What am I doing wrong?
Same problem, no matter what I do I get the exact same output. Anyone care to help?
I use inpaint sketch and got some halo around the inpaint area. Are there settings that I messed up or something?
Can you make some tutorials on that, Sir?
Edit: ahh, it was the blur setting. I think I'm still confused by it; is it for the mask or for the finished painting? idk xd
5:52 man, this warning is what cost me a lot of time; they need to fix this bug.
6:00 onwards. One thing that I think you know, but in case you don't: if you want to inpaint something that already exists in the picture, it's better to use "Whole picture" so it can pick up colors and existing concepts that are already there, keeping it more cohesive in terms of color and aesthetics; afterwards you can refine the details with "Only masked". Even for something new, it's better to do the whole picture first and then work on the details in the "Only masked" area.
What do you have installed that lets you inpaint with color, and specifically lets you pan over the image to pick the color of whatever you're hovering over?
Glad to see you're hanging around 2.1 models lately; that's where I'm at myself, and I'm convinced the community is snoozing on their potential. They're much faster, look better by default, and (once you get used to them) are honestly easier to prompt for. Got any favorites to recommend?
2.1 models crash for me
Anyone on AMD having the problem of inpainting not doing anything at all to the picture? It renders and processes, but the result just looks identical every single time; there is no change whatsoever. If someone knows a fix I would be grateful.
I would love a more detailed tutorial regarding the cup situation in the video. Many times when I try to use inpainting, I generate unsatisfactory results that do not blend well with the image at all, some having a clearly different art style, lighting and size. It just screams 'out of place', and I kind of give up after some time and go back to rolling the dice on img2img, hoping for a more detailed or pleasing image.
That's because you should use inpainting models; they not only ensure the edges/lines match, but they also get the light/colors right.
Thanks for the tutorial. But what if the face is good and the body is messed up? How can we do it the other way around?
Paint the face and then press the button to use the inverted mask.
Is there any way to make it automatically detect the clothing area and, using a prompt, change only the clothes and not the real face in the image?
7:35 This is so like Bob Ross's happy little accidents 😂
Inpainting is awesome but super finicky. It really takes a lot of iterations. One thing I like using inpainting for is different facial expressions for a video game or visual novel.
Hi Sebastian. Thank you for your videos. I have a question. How can I create a workspace with interfaces like you have? Where can I download extra stuff to get more settings for my art? Thank you
You're my new ASMR
Nice! 😘
Thanks! What are the system requirements to run this? Or does this run in the cloud?
Can you change models while doing this? Like use a different model for a sketch or to re-render a face?
The question I have after you jumped down the second-cup rabbit hole is: can you just select or mask the existing coffee cup and make a copy to insert in the image, or is that something I would export the project to Photoshop for and play with over there?
As I understand it, the latent noise is intended to preserve the color scheme of the picture. Or not.
If we want to fill in the face with a LoRA or DreamBooth model, which denoising strength is ideal?
How can I change the brush colors? I can't seem to find the setting for that.
Cool, but how do you remove the coffee cup so there is nothing there?
How do you get the canvas zoom to work? I've got the extension installed but spinning the mouse wheel while I'm over the image doesn't zoom.
Is it possible to have an image-based mask rather than drawing the mask? I want to be able to render out my product and iterate on different backgrounds. By also rendering an alpha mask, could I use the black-and-white image as a mask so that the product is unaffected?
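Yes - the "Inpaint upload" sub-tab in img2img takes exactly this: an image plus a separate mask file. A Pillow sketch of deriving that mask from a render's alpha channel (paths are placeholders):

```python
from PIL import Image, ImageOps

render = Image.open("product_render.png")  # RGBA render with alpha
alpha = render.getchannel("A")             # opaque product = 255

# A1111 repaints white and keeps black, so invert the alpha:
# product -> black (kept as-is), backdrop -> white (regenerated).
mask = ImageOps.invert(alpha)
mask.save("background_mask.png")
```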
THX again Seb, very useful :))
A quick question: on your top quick settings tab, for Add Lora, how did you set it to None?
Wow, thx !
I used a lot of inpainting for my newest video but still think it looks a little bit off in some places (e.g. the Maggot Robby faces). I just did a quick test with the width/height adjusted to the face properties, and it yields waaaay better results.
I think the inpaint sketch is tricky; most of the time it looks really off, and it takes much effort to repair things after an object is placed.
btw. loved your Bob Ross style video, you really could open a second channel and just do that there
Color picker: Seb, how did you replace the default color picker in Windows? The default color tool in Windows does not have a picker, just manual color settings. The one you use does. Pls reply.
Cheers, Jan
Fixed it; just change the web browser to Edge.
No matter what I do, I can't make anything new; it just blurs the area I inpaint! Any solution?
What does the nfixer negative prompt do, exactly?
Thank u! amazing tutorial
You're welcome! 🌟
Do you have a fix for the inpaint canvas disappearing along with the image I'm trying to edit? The three little lines in the bottom right corner don't show and I can't drag out the canvas; the window just disappears. Reloading the page brings it back but doesn't fix the issue.
What is the checkpoint name at 0:36?
How do you use the scroll? I installed it but nothing happens.
What kind of extension do you use for your colour picking? Because it is different from what I see onscreen.
I found out what it was; apparently when you use 1111 in Firefox it loads a different colour picker that is extremely limited (it can't eyedrop-pick from the image, for instance). So you kind of have to open it in Chrome instead for it to work.
Love your videos man, thanks for the help! One issue I'm struggling with is that inpaint sketching seems to freeze every time I try to use it, and then I have to reload the UI or cancel the image for A1111 to start working again. Do you know what might be the issue?
My inpainting seems to be broken for some reason... when I click to inpaint, the entire image has a purple hue to it... never had this issue before. Anyone else run into this problem?
Hello!
I installed Stable Diffusion based on your tutorials; txt2img and img2img work fine, ControlNet all good. But I've got a problem: inpainting doesn't work. Where I mask, it either doesn't change, or gets a blurry, transparent-ish overlay, or fills with one color and that's it. I tried looking through Reddit forums and discussions but can't find any solution. Do you have any suggestions?